Gradle for Java Programmers
Gradle is the new(er) kid on the Java build automation block. You probably know that Gradle's build scripts are written in Groovy, and that Gradle first gained traction automating builds of Groovy projects. It's becoming more and more popular for Java projects, but much of the documentation remains aimed at Groovy developers, creating a bit of a mismatch and something of a learning barrier.
A brief history of the Java Development Kit
Java was originally developed by Sun Microsystems, who at the time were the leading vendor of Unix operating
systems in the world. As such, the first release of Java was a bit Unix — and therefore
command-line — centric. A lot of this legacy still persists, and one of the keys to making
sense of the Java tool set is understanding its command-line origins. For instance, Unix pioneered
the concept of the hierarchical file system, in which each file is contained in a directory and each
directory is itself contained in another directory, except for the single "root" directory.
Interacting with the computer involved typing the names of commands along with their arguments.
When you wanted to run a program from a command line shell (as the Unix command-line interface
was called), you would type the name of a file which contained an executable application; the
shell was responsible for searching each directory in the user-configurable PATH for a file which
had the same name as the command and was marked as executable. It was up to the user to maintain
a colon-separated list of directories for the shell to search (this is still the case with modern
operating systems, incidentally, although the OS works hand-in-hand with installer programs to do
a better job of hiding this from you).
When James Gosling was developing Java around the mid-90's,
the development environment that he would have likely been most familiar with
was the Sun C compiler toolchain. In that environment, if you wanted to write a C program, you'd
write some C code using an editor like vi, save it, and run the C compiler cc with the source code
as input; it would produce an object file. You would then run the ld linker to link that object
file with other object files and library files into an actual executable program. Java naturally
followed in the tradition of C and C++: it was originally envisioned as sort of a "better" C++.
The first release of Java had a toolchain that operated in a similar way (in fact, so does the most recent
release of Java, although again, if you use an IDE, the IDE does a pretty good job of hiding
the details from you). If you wanted to write a new Java program, you'd run vi, type some Java
source code into a .java file, save it, and run the javac compiler to produce the .class file,
which was executable (by the Java Virtual Machine). However, one of Gosling's stated aims was to
"fix" some of what he considered shortcomings in the dominant object-oriented programming language
of the time, C++. For instance, whereas C++ had a "flat" namespace, which placed each source file
on the same logical level, Java classes are almost always grouped together into unique packages.
Packages themselves can contain sub-packages, and to keep them grouped together logically, the
javac compiler emits a new directory for each level of the package hierarchy and generates the
.class files underneath the last one. This decision to mirror the package hierarchy in the
directory structure ended up impacting Java's support for build automation quite a bit. (The Java
specification states that this is not a hard-and-fast requirement, but to date no implementation
has broken with tradition and represented groupings of classes into packages any other way.)
When one class imports or otherwise depends on another, that dependency is resolved by searching
the Java classpath, which is analogous to the shell's search path.
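A quick sketch of this behavior (the class and package names here are hypothetical):

// Greeting.java - the package declaration dictates the output directory
package com.example;

public class Greeting {
    public static void main(String[] args) {
        System.out.println("Hello from a package");
    }
}

$ javac -d classes Greeting.java
$ ls classes/com/example
Greeting.class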
Java was originally imagined as a programming language for TV set-top boxes, such as the TiVo; when the
internet really started to take off in popularity, though, Sun repurposed the language as a
programming/hosting environment for what they called applets.
These applets were envisioned as mostly
self-contained little interactive subwindows inside web pages that would be executed by the
browser, on the client's machine.
Since applets were expected to be implemented as single, standalone .class files, the first
release of Java didn't pay much attention to how dependencies would be distributed.
However, recognizing that some applets
might require multiple cooperating classes, the Java specification allowed classpaths to include
not just directory names, but pointers to compressed folders - specifically .zip files. As Java
grew in popularity, it became more common for reusable collections of .class files to be distributed
as compressed .zip collections - in fact, the earliest database connectivity library for Java was
Oracle's, and it was just distributed in an archive named classes.zip
(apparently Oracle thought
they would be the only people distributing classes this way, so there was no need to indicate
where it came from in the name).
As the custodians of Java started watching their creation take off in popularity, they realized
that a standardized, Java-specific means of distributing reusable libraries was necessary, so they
created the JAR standard. Well, "created" is a stretch - a JAR file is actually just a .zip file
with one special subdirectory named "META-INF" (and even that subdirectory is optional). Even to this day,
.zip and .jar files are interchangeable; software that can read one can implicitly read the other.
However, the standardized naming strategy — .jar — helped developers keep track of
what each one was for. A special utility — the jar command — was distributed with the Java
Development Kit for creating .jar files from collections of (related, one would hope) .class files.
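For instance, a minimal invocation (the file names are hypothetical) looks like this:

$ jar cf mylib.jar -C classes .
$ jar tf mylib.jar
META-INF/
META-INF/MANIFEST.MF
com/
com/example/
com/example/Greeting.class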
Although applets never really caught on, Java found new life on the server side when, with JDK 1.2, Sun standardized the Java servlet interface and the Web Application Resource (WAR) structure. One of the major contributions of the WAR structure was the introduction of the WEB-INF/lib subdirectory: the author could dump all of the jar files that the application depended on into it, and they would all be automatically added to a "virtual" classpath that was specific to that individual WAR file.
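The layout of a typical WAR file looks something like this (the application and library names are hypothetical):

mywebapp.war
  index.jsp
  WEB-INF/
    web.xml
    classes/
      com/example/MyServlet.class
    lib/
      some-library.jar
      jdbc-driver.jar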
Build Automation in Java
Earlier I mentioned that C and C++ programs were compiled via invocations of the C compiler cc.
Although this is true, it's only part of the story. Even as long ago
as the 70's, C-based programs grew to encompass many source code files and usually quite a few
shared, reusable libraries. Developers observed that invoking cc
on each individual
source code file was tedious, but making a blanket call to recompile all source code files
was a big time waster. The make utility was designed to determine which C source code files had
changed by comparing their timestamps to those of their respective object files, and to recompile
and re-link just those files.
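A classic Makefile captures those relationships in just a few lines; here's a sketch (the file names are hypothetical, and the command lines must be indented with tabs):

# x.o is rebuilt only if x.c has a newer timestamp
x.o: x.c
	cc -c x.c

# the program is re-linked only if an object file has changed
prog: x.o y.o
	cc -o prog x.o y.o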
Make being such a huge timesaver for C developers, early Java developers (including me) started
to investigate re-using it to automate Java builds. One big sticking point in reusing Make for
Java was the packaging structure: the Java source files were typically stored in subdirectories
that matched their package naming structure because the Java compiler would always generate the
class files that way. That made declaring dependencies difficult; rather than saying that
the file x.o depended on the file x.c in the same directory, you wanted to say
that target/d1/d2/d3/x.class depended on src/d1/d2/d3/x.java, and
the make
utility didn't have a good syntax for that — at least not when you
had more than a handful of packages. It was even worse when classes in one package imported
classes in another - you had to order your Makefile correctly to ensure that the classes in the providing
package were compiled before the dependent package. Make didn't have any way to determine, nor
offer any syntax for declaring, that a set of .java source files relied on others. If all of the
code compiled correctly, you probably wanted to package it up into a .jar file — but again,
because Make couldn't properly take Java's packaging structure into account, it was difficult to
instruct Make exactly when a .jar needed to be re-created.
Although a lot of us muddled through, adding lots of Makefile code and keeping track of
dependencies by hand, it got to the point where you started to wonder what benefit Make was
even providing if you had to perform so much accounting yourself. Around 1999, a Sun developer
named James Duncan Davidson finally got fed up with Make's shortcomings with regard to Java and
created a new Java-specific build system that he named ant (Another Neat Tool). Whereas
Makefiles were structured as targets and sources along with instructions on how to generate one
from the other, Ant builds were written as XML (which was all the rage back then) that declared
where the Java source files could be found; the Ant tool was smart enough to take into account
the packaging structure and determine when one .class file was "stale" and in need of a rebuild.
It also had some fairly Java-specific semantics for describing classpaths for dependency
resolution.
Advances in Java source management
While all of this was going on, Kent Beck and Erich Gamma popularized the practice of writing code which would test other code for correctness with the release of their JUnit framework. Automated unit testing, as good an idea as it was, imposed some specific constraints on how you structured your code: if the unit tests were in a separate package from the code that you were testing, then everything that you tested had to be marked public so that the unit test code could see it. Conversely, if you put the unit tests in the same package as the code under test, then that test code would end up getting jarred up and deployed with the rest of your "production" code, which was also undesirable (and potentially a security issue, too). The trick most people came up with to get around this was to declare parallel directory hierarchies, so that the to-be-deployed production code was stored in one subdirectory and the test code was stored in a different subdirectory, but the build process would merge them into the same package structure at build time. This did mean that the build script had to keep track of which source files served which purpose, and only package up the correct ones at deployment time. When individual developers turned out to be disappointingly skeptical of the value of writing these tests, code coverage tools like Cobertura came along to measure how much of the code was actually being exercised by the unit tests. Running this code quality tool (along with any others, like SonarQube) was yet another build tool concern.
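In concrete terms, that parallel-tree trick looks something like this (the directory names shown here are the ones Maven later standardized; early projects chose their own):

src/main/java/com/example/Widget.java      <- ships in the .jar
src/test/java/com/example/WidgetTest.java  <- same package, never shipped

Because both files declare package com.example, the test class can see the package-private members of the class under test, but the build script only packages the tree under src/main.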
It was and is typical (although not strictly required) to use a unit testing framework like JUnit or TestNG to manage the starting and stopping of these tests and their aggregate results; these frameworks themselves are distributed as JAR files. It's also typical to use yet another library to "mock" or "stub" out code that you don't want to run during a test, like one that adds a new user to a database. These created yet more dependencies that were required only at certain points in the build cycle.
The Java Development Kit also shipped with a useful tool called javadoc from its very first
release - it would read through the source code, find specially marked comments, and format them
into a cross-referenced HTML summary of the project's contents. Although my
experience with getting other developers to provide usable and meaningful javadoc comments has
been hit and miss, most open source projects, as well as Java itself, usually did a great job of
keeping their javadoc comments up to date in their source code. However, running the javadoc
tool and subsequently deploying the resultant HTML summary was yet one more step in the overall
build process. Ant build scripts started to get very complex. Typically they involved (at least)
six steps:
- Compile all of the source code, using the "main" classpath
- Compile all of the test code, using the "test" classpath
- Run the unit tests, usually using a different classpath
- Run the code coverage tool
- Run javadoc to generate the documentation
- Run the jar command to package up the main classes
Each step depended on the previous step and the process should be halted if one failed; each had inputs and outputs whose presence would determine whether the step needed to be run at all. Ant build scripts were flexible enough to declare all of these steps and their interrelationships, but every project had to do so independently.
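An abridged Ant build along those lines might look like this (all names and paths are illustrative):

<project name="myapp" default="jar">
  <target name="compile">
    <javac srcdir="src" destdir="build/classes"/>
  </target>
  <target name="compile-tests" depends="compile">
    <javac srcdir="test" destdir="build/test-classes" classpath="build/classes"/>
  </target>
  <target name="jar" depends="compile-tests">
    <jar destfile="dist/myapp.jar" basedir="build/classes"/>
  </target>
</project>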
Advances in dependency management
By far, though, the biggest hassle in compiling and running Java was managing classpaths. By 2004, most users of Java were using it not to build applets, nor even desktop applications, but "web applications": server-side systems that provided useful functionality to a browser. Such web applications were virtually never self-contained; they had external dependencies on Java's servlet libraries, some JDBC drivers, the Jakarta Commons project, a logging infrastructure, XML parsers, object-relational mapping tools - it sometimes seemed as though "enterprise" Java developers were in competition with one another to write an application that incorporated every other Java project ever written.
Managing these in Ant was an arduous task. For one thing, you had to have them handy to tell the
compiler where to resolve them from. It was common to check them all into version control so that
the build script could refer to them at build time. A developer named Jason van Zyl got tired of
struggling with distributing dependencies: not only did projects depend on external libraries, but
these dependencies had dependencies of their own - just getting the right versions downloaded and
assembled became a significant build task. He put together yet another build tool that he named
Maven. Maven's real "killer feature" was the idea of setting up a central repository where EVERY
Java library could be uploaded and versioned along with metadata that described its dependencies;
the Maven build scripts just listed the canonical names of their dependencies and their versions,
and the Maven build software took care of downloading the right JAR files along with the JAR
files that those JAR files depended on.
However, a less-appreciated feature of Maven was that it standardized a lifecycle for Java project builds. Ant waited for you to tell it what to do: even though it was a tool for compiling Java source code, it would wait for you to tell it that you wanted to compile Java source code. Maven, on the other hand, observed that 99% of all Java project builds included, at a minimum, compiling Java source code, running unit tests, and packaging up the .class files - in that order, stopping if any of those tasks failed. Further, a very large percentage of Java projects went through a source-code generation stage (if they were using an ORM or an XML binding library), a set of integration tests, and very often a deployment to an application container like Tomcat. Maven standardized not only the names of those steps (which it calls phases), but also where those steps should look to find their inputs and where they should put their outputs.
Maven's opinionated view of where files should reside could be both a blessing and a curse, though. If you accidentally put your test classes under src/test/com/... instead of src/test/java/com/... it would silently report that all of the unit tests passed because it didn't find any (and therefore, by definition, all of the unit tests that it found passed). However, as long as you memorized and adhered to the Maven conventions, and were following a fairly standard build process, you could accomplish a lot with very little project declaration. You had to create a pom.xml, and you had to include a bit of boilerplate in it, but for the most part you could get away with nothing project-specific other than the declaration of your dependencies.
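A minimal pom.xml along those lines (the group and artifact names are hypothetical):

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>myapp</artifactId>
  <version>1.0.0</version>
  <dependencies>
    <dependency>
      <groupId>org.apache.commons</groupId>
      <artifactId>commons-lang3</artifactId>
      <version>3.0.1</version>
    </dependency>
  </dependencies>
</project>

With nothing more than this, mvn package will compile src/main/java, run any tests under src/test/java, and produce target/myapp-1.0.0.jar.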
The Groovy Programming Language
Java was developed as an easier, safer alternative to the hard-to-use and dangerous C++ programming language. However, roughly ten years after Java's inception, developers began to suspect that Java itself wasn't quite as easy and safe to use as it could be. The Groovy project aimed to take a few pages from the functional programming playbook and develop a more dynamic programming language that was Java-like and ran on the standard JVM, but supported more meta-programming concepts.
One of the most fundamental departures of Groovy from Java was the introduction of Closures:
top-level, first-class functions. Although Java 8 now supports lambda expressions, which are
similar (but slightly different in terms of variable scoping), Groovy supported closures on a
standard JVM as long ago as 2007. Groovy variables are also dynamically typed, as opposed to
Java's strict static typing. This means that a Groovy function can be declared like:
x = {
System.out.println("Hello, World");
}
x();
Listing 1: first-class function
And, of course, functions can be passed around as arguments to other functions:
def y(arg) {
arg();
}
y(x);
Listing 2: Passing functions around as arguments
Note in listing 2 that I had to use def to indicate that y was a function. Alternatively, I could
use a more JDK 8 lambda-like expression to declare the argument:
y = { arg ->
arg();
}
Listing 3: Declaring arguments in Closures
Groovy also allows you to omit the trailing semicolons (and most Groovy programmers do, so get
used to seeing code without them), so the previous listings can actually be simplified to the
more Groovy-like:
x = {
System.out.println("Hello, World")
}
y = { arg ->
arg()
}
y(x)
Listing 4: omitting redundancies
In fact, the println can even be simplified further: Groovy makes println available without the
System.out prefix, and you can omit the parentheses when calling a function that takes arguments,
so you can invoke it like println 'Hello, World' (note that single quotes work as well as double
quotes to denote strings in Groovy, also unlike Java - in fact, Groovy treats double-quoted
strings slightly differently than single-quoted ones, so you should probably prefer single quotes
unless you're using the advanced GString features).
Similarly, y(x) can be "simplified" to y x. However, calls to no-argument functions have to
retain their parentheses, so arg() has to remain the way it is.
Groovy offers a simplified syntax for constructing every Java developer's two favorite classes,
the ArrayList and the HashMap: []. A declaration such as [1,2,3] creates a new instance of
java.util.ArrayList, and a declaration such as ['a': 1, 'b': 2, 'c': 3] creates an instance of
java.util.LinkedHashMap. In fact, you can do some, er, "unexpected" things with Map instances in
Groovy - if you omit the quotation marks on the keys in the map declaration, Groovy will
"magically" coerce the keys to strings! This means that this declaration:
m = ['a': 1, 'b': 2, 'c': 3]
is identical to this one:
m = [a: 1, b: 2, c: 3]
Note that this only works on the keys - if you try to do:
m = [a: x, b: y, c: z]
you'll get a MissingPropertyException (assuming x, y and z haven't been defined anywhere).
You can also refer to Map contents using bracket notation, like array indexes - so m['a'] would
yield the integer 1. You can also refer to map contents as if they were properties: m.a is
equivalent to m['a']! Note that m[a] would fail. (Also note that you can't refer to ArrayList
members as properties; you have to say a[1], never a.1. You can, however, append to a list using
the << overload, as in a << 5; there's no equivalent for maps.)
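Putting those pieces together, here are a few lines you can paste into the groovysh shell to verify:

def list = [1, 2, 3]          // a java.util.ArrayList
list << 5                     // append via the << overload
def m = [a: 1, b: 2]          // keys are coerced to the strings 'a' and 'b'
assert m['a'] == 1
assert m.a == 1               // property-style access on a map
assert list[1] == 2           // bracket indexing on a list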
On the topic of property accesses, Groovy will also automatically expand a property reference
such as a.prop to a.getProp(), following the so-called "JavaBean" conventions for getters and
setters. This works on Java properties as well as Groovy properties, so if you declared
a = [1,2,3], a call to a.class.name would automatically expand to a.getClass().getName() and
return java.util.ArrayList. Watch out, though - this doesn't work on maps. Why? Because maps are
collections of properties themselves - if you tried m.class, Groovy would look for a map member
named 'class' and return null. In the case of Java Maps, you have to write out getClass().
There is, of course, a lot more to Groovy, but Closures and the map/list syntax are two of the most visible differences between Groovy and Java.
The Gradle build system
Enter Gradle. Gradle was designed in response to the perceived flaws in Maven (which was designed
in response to the perceived flaws in Ant, which was designed in response to the perceived flaws
in Make), one of which was that its build files were static XML rather than code in a real
programming language like Groovy. If you peek at a Gradle build file, it kind of looks like a
Maven POM file with a simplified syntax; XML markup is replaced with more Java-ish curly braces
and space-delimited keywords. When I first saw a Gradle build file, I assumed that it was a Maven
POM written out in JSON syntax (since JSON is all the rage these days), but on closer inspection
it was clear that what I was looking at wasn't valid JSON, either. In fact, what you're looking
at is not a new syntax at all. A Gradle build file is a Groovy source-code file, typically called
build.gradle, which configures a Project instance. Every top-level invocation in the build.gradle
file is a Groovy statement; the delegate is an instance of Project.
To underscore that a Gradle build file is a Groovy source-code file, observe that you can actually define and run somewhat arbitrary code in a Gradle build. For instance, here's a Fibonacci number computation:
def fib(n) {
if (n < 2) {
return 1
} else {
return fib(n-2) + fib(n-1)
}
}
println fib(7)
Listing 5: build.gradle file that computes a Fibonacci number
If you have Gradle installed, you can save this file as build.gradle, run gradle, and see the result:
$ gradle
21
However, you can't put just any Groovy code into a Gradle build file. This, for instance, will
fail:
x = fib(7)
Why? It has to do with a Groovy-ism: what I'm actually telling the Groovy runtime to do here is
to call setX on the delegate of the script. Gradle sets the delegate to an instance of
org.gradle.api.Project, and any previously undeclared property or method call is evaluated
against this implicit instance of Project. You can see some of these properties in the
build.gradle sample of listing 6:
println name;
println description;
println group;
println path;
println version;
Listing 6: build.gradle script that prints out some of the implicit properties of the Project instance
In addition to exposing getters this way, Project exposes functions that can be invoked, one of
the most important of which is task. Although, as I demonstrated above, you can technically use
Gradle to perform arbitrary computational tasks like computing Fibonacci numbers, that's
obviously not what it's for — it's for compiling and building software. A Task, then, declares
one of the steps involved in building software. Invoking the task method of the Project delegate
declares a new task. The newly declared Task instance has two particularly interesting methods,
doFirst and doLast, which accept Groovy closures describing what to do first and last when
executing the task, respectively. Listing 7 illustrates the declaration of a "dummy" task.
task 'testTask'
testTask.doFirst({
println('This happens first')
})
testTask.doLast({
println('This happens last')
})
Listing 7: build.gradle script that emits a couple of notations
Remember that Groovy allows you to call functions without parentheses, so task 'testTask' is the
same as task('testTask'). Notice that the invocation of task 'testTask' created a new property in
the Project named testTask, which I can now invoke methods on.
You can see this new task by running:
$ gradle -q tasks
------------------------------------------------------------
All tasks runnable from root project
------------------------------------------------------------
Build Setup tasks
-----------------
init - Initializes a new Gradle build. [incubating]
wrapper - Generates Gradle wrapper files. [incubating]
Help tasks
----------
components - Displays the components produced by root project 'tmp'. [incubating]
dependencies - Displays all dependencies declared in root project 'tmp'.
dependencyInsight - Displays the insight into a specific dependency in root project 'tmp'.
help - Displays a help message.
projects - Displays the sub-projects of root project 'tmp'.
properties - Displays the properties of root project 'tmp'.
tasks - Displays the tasks runnable from root project 'tmp'.
Other tasks
-----------
testTask
And invoke it by naming it as the first parameter to the gradle invocation:
$ gradle -q testTask
This happens first
This happens last
Even more syntactically improbable, you can omit the quotation marks on 'myTask' and it will
still work correctly:
task myTask
myTask.doLast { println 'doing this last' }
Gradle makes this work by overriding the getProperty and invokeMethod methods of the script's
MetaClass.
You'll probably see another syntax in most Gradle build scripts:
myTask << { println 'doing this last' }
Gradle overrides the << operator to invoke the doLast
function of the task.
However, this is deprecated and will be removed in the next Gradle release, so don't get in the
habit of using it.
Tasks have lists of inputs and lists of outputs. If any of the outputs is older than any of the inputs, the task is considered out of date and its actions are run. This is probably the only thing that Make, Ant, Maven and Gradle all have in common.
By default, tasks are independent, but you can declare dependencies between them, Ant-style,
using the syntax:
task task1 {
doLast {
println 'task 1 do last'
}
}
task task2(dependsOn: task1) {
doLast {
println 'task 2 do last'
}
}
This causes task1 to be run (if its outputs are older than its inputs) before task2.
So far, you've seen how to define your own tasks in Gradle build files, but they don't do
anything useful except print that they have been run. Since Gradle is a build tool, you
probably (reasonably) expect it to actually get around to building something.
Gradle is somewhat interesting in comparison with the other Java build tools in that it will
run, and even do something semi-useful, without any build script. This isn't possible with Make,
Ant or Maven - if they don't find their respective Makefile, build.xml or pom.xml build files,
they'll fail with an error message. Gradle, on the other hand, can sort of bootstrap itself by
creating a sample build.gradle file or creating a wrapper for itself. The Project
instance that is configured in the build script is capable of quite a bit of processing on its
own, with tasks for copying, creating, deleting and moving files and directories as well as
executing arbitrary shell commands.
However, to do anything arguably useful — and definitely to take advantage of Gradle the way it's meant to be used — you need to, at a minimum, apply at least one plugin to your project. You apply a plugin by adding a line of the form:
apply([plugin: 'name'])
to your build.gradle file. This invokes the apply method of the Project delegate with a new
single-entry map: plugin, whose value is 'name'. However, since
the build.gradle file is a Groovy source file, it permits a simplified syntax:
apply plugin: 'name'
This is how most people invoke it. There is also a newer, preferred syntax:
plugins {
id 'name'
}
Plugins, in general, declare new tasks. So, after adding the line
apply plugin: 'java'
to the build.gradle file, the tasks list expands quite a bit. In general, when you add plugins
(and there are quite a few available), they make tasks available to your build. If you run
gradle build, Gradle finds the following dependent tasks and executes each in order:
- compileJava: Look for a directory named src/main/java. If found, compile every .java source file under it, placing the results under build/classes/main (which is created if it isn't already there).
- processResources: Look for a directory named src/main/resources. If found, copy every file found into build/resources/main (note that it deliberately inverts the directory hierarchy). Directory hierarchies are preserved. What, you may ask, is this for? Read on.
- classes: A "placeholder" dependency that runs the previous two.
- jar: Archive the contents of build/classes/main and build/resources/main into a new .jar file named, by default, after the current directory. This way, any file contained in src/main/resources is available on the Java classpath via a call to getClass().getResource(). This is a common way to distribute things like, say, log4j.xml configurations.
- assemble: A placeholder dependency on jar.
- compileTestJava: Look for a directory called src/test/java and compile everything found under it, putting the resulting .class files into build/classes/test. Notice that this comes after the jar and assemble tasks, so the test code does not end up in the .jar file.
- processTestResources: Copy the contents of src/test/resources (if found) into build/resources/test.
- testClasses: Another placeholder task that runs the previous two.
- test: Compile all of the main code and test code, then run the tests using the selected testing framework.
- check: A placeholder task that invokes test and its dependencies.
- build: Yet another placeholder that runs all of the previous tasks.
You'll notice a recurring theme here - most tasks look for something under a directory named src,
process it in some way, and put the results under build. You can change these defaults (in fact,
everything about this process is configurable), but most people accept the defaults because (a)
there's nothing wrong with them and (b) it's a lot more obvious to somebody else who's familiar
with the conventions what you're doing if you leave the defaults alone. Gradle even allows you to
list multiple input directories by configuring sourceSets with a list of directories.
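For instance, a sketch along these lines (the extra generated-sources directory is hypothetical):

sourceSets {
    main {
        java {
            // compile generated sources alongside the normal tree
            srcDirs = ['src/main/java', 'src/generated/java']
        }
    }
}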
It's worth noting that you can call each of these tasks individually if you want - in fact,
if you call gradle processResources
, gradle will copy from src/main/resources to
build/resources/main, but it will not run compileJava; processResources doesn't depend on
compileJava, but classes
does.
If you run gradle tasks
from the command line with the Java plugin included in your
Gradle build file, you'll notice that it only lists about half as many tasks as I showed above.
Gradle (and its java plugin) are smart enough to divide their tasks into high-level
and low-level tasks; by default, all you see in the tasks listing are the high-level tasks that
you're likely to spend most of your time invoking. If you want to see them all, though, you
can run gradle tasks --all
. It still gives you a sense of which ones are the "low-level"
tasks, though, by indenting them. You may wonder - why all the "placeholder" tasks that
only seem to exist to invoke other tasks, but don't do any actual work themselves? These create
a lifecycle for the Java build which other plugins can hook into — Gradle plugins
are designed to themselves be extensible.
So, you can actually do quite a bit with this one-line build.gradle file, but in almost any
useful case you're going to have to declare at least a handful of dependencies. As you could
probably guess, dependencies are declared by invoking the dependencies method of the Project
delegate. dependencies accepts as its single parameter a closure (Groovy's term for an anonymous
function) which will be invoked with a DependencyHandler as the delegate. (Gradle's documentation
refers to this and other methods that accept closures as script blocks.)
However, dependencies have to come from somewhere. If you're familiar with Maven, you're probably
used to declaring repository locations in your settings.xml file; with Gradle, you declare them
individually in each build file using (surprise, surprise) an invocation of
repositories
on Project
. Again, it accepts a closure (an anonymous
function) which in turn calls exposed members on the
RepositoryHandler
class; one of these is the simple, parameterless mavenCentral
method which instructs
Gradle to look under https://repo1.maven.org/maven2
to resolve dependencies.
Listing 8 illustrates a simple build.gradle file that declares a repository and some dependencies.
plugins {
id 'java'
}
repositories {
mavenCentral()
}
dependencies {
compile 'org.apache.commons:commons-lang3:3.0.1'
}
Listing 8: build.gradle with some dependencies
Now, if you run this, you'll see that these dependencies are resolved and downloaded whenever
the compileJava task is run:
:compileJava
Download https://repo1.maven.org/maven2/org/apache/commons/commons-lang3/3.0.1/commons-lang3-3.0.1.pom
Download https://repo1.maven.org/maven2/org/apache/commons/commons-parent/21/commons-parent-21.pom
Download https://repo1.maven.org/maven2/org/apache/commons/commons-lang3/3.0.1/commons-lang3-3.0.1.jar
As you can see, the POM and JAR files are downloaded... but to where? You won't find them
anywhere underneath your build directory. The fact that they came from mavenCentral (and
referenced POMs, no less!) would suggest that they were downloaded to ~/.m2/repository, but you
can verify that they don't land there, either. This is correctly regarded as an "undocumented"
implementation detail (as in, don't rely on this undocumented behavior not to change), but an
important one - Gradle caches its dependencies under something like
~/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-lang3/3.0.1/57df52e2acfbb4de59a69972f4c55af09f06a2df/commons-lang3-3.0.1.pom
As I mentioned above, plugins add tasks to a project, thus making them available to run; they also add arbitrary properties and methods to it. In fact, it's the Java plugin that makes the compile configuration available to the dependencies block; if you remove the plugin, you'll get an error even trying to list the tasks:
> Could not find method compile() for arguments [{group=org.apache.commons, name=commons-lang3, version=3.0.1}] on root project 'tmp'.
In general, Gradle builds include the 'java' plugin, a repositories declaration, and a
dependencies list - this is so common that the helper command gradle init creates a standardized
template for you that includes exactly those three sections.
Notice in listing 8 that I invoked the compile method of the DependencyHandler delegate. This is
what Gradle calls a configuration, and it corresponds roughly to a classpath. In particular, this
is the classpath that will be used when compiling the "main" source code under src/main/java.
There are quite a few configurations defined by the Java plugin, including testCompile, for
declaring the test code's dependencies, and runtime, for declaring dependencies that aren't
required to compile but are required to run - JDBC drivers are common examples.
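A dependencies block exercising several configurations might look like this (the versions and artifacts are illustrative):

dependencies {
    compile 'org.apache.commons:commons-lang3:3.0.1'
    testCompile 'junit:junit:4.12'
    // needed on the runtime classpath, but not to compile
    runtime 'mysql:mysql-connector-java:5.1.38'
}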
Given the inherent capabilities of the Java plugin and the build-by-convention standards that
Maven set, you actually don't need to do much more than list your preferred repository and your
dependencies in order to use Gradle to automate the build of a Java-based project. Listing 8 is
a complete build script that will build your code, test it (if you include the dependency
testCompile 'junit:junit:4.1'), and jar it up into build/libs, as long as you follow the Maven
convention of locating your source code under src/main/java and your test code under
src/test/java. However, it's a good bet that a lot of Java developers who are looking at Gradle
are looking at it to manage web application builds. As you probably guessed, Gradle also has a
dedicated war plugin. The war plugin actually only adds a single task, war, which compiles
everything and packages it, including dependencies, into a standard .war file. The plugin also
introduces some properties and configurations; one of the new properties is webAppDir, which can
be used inside tasks to refer to the location of web resources such as .jsp files.
So far, most of the capabilities I've described (a small subset of Gradle) are also available in
Maven, if you don't mind dealing with its finicky XML syntax. However, Gradle really shines when
it comes to managing multiple interdependent projects.
Gradle recognizes that most Java projects are actually collections of sub-projects or components
(just as Java itself recognizes that most Java components are collections of packages). The
concept of defining a parent build with multiple sub-builds is a first-class concept in Gradle:
the standard settings.gradle file lists the sub-projects. In the standard multi-module view of
Gradle, the top-level build script sets all of the common settings via an invocation of
allprojects.
In fact, Gradle is so "parent project/child project" oriented that the Gradle command-line
interface will automatically search through all parent directories up to the root to find
any containing parent projects. You can even declare dependencies among projects by declaring
the artifacts
produced by one and listing that artifact as a dependency
in another; Gradle will recognize the dependency at build time and ensure that your projects
are built in the correct order.
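For example (again with hypothetical project names), the web sub-project can consume the core sub-project's output like any other dependency:

// web/build.gradle
dependencies {
    compile project(':core')
}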
If you inherited a Gradle-based project, you may notice a slight oddity - in addition to the
standard files build.gradle and settings.gradle in the root directory, you'll probably also see
an executable script named gradlew. This is a particularly useful Gradle addition to build
automation: the gradlew script (short for "gradle wrapper") downloads and installs a specific
version of Gradle — the one that the developer has tested against — and runs the build using
that. So, technically, you don't even need to install Gradle itself in order to run somebody
else's build: the Gradle wrapper will do it for you and even make sure you're using the right
version. You, of course, should pay this forward and add the Gradle wrapper to each of your
Gradle-based projects — you can do this with the standard task gradle wrapper.