Netflix has open sourced many of our Gradle plugins under the name Nebula. These plugins are there to lend our expertise and experience to building responsible projects, internally and externally. This talk will cover some of the ones we've published, why we want to share these with the community, how we tested and published them, and most importantly how you can contribute back to them.
Nebula started off as a set of strong opinions to make Gradle simple to use for our developers. But we quickly learned that we could use the same assumptions on our open source projects and on other Gradle plugins to make them easy to build, test and deploy. By standardizing plugin development, we've lowered the barrier to generating them, allowing us to keep our build modular and composable.
This tutorial provides a detailed hands-on experience to bring up the necessary components to run the @NetflixOSS stack. This includes priming your Amazon account (IAM Profiles, Security Groups, etc) and setting up Asgard and Aminator. Together they can be used, time permitting, to launch many more @NetflixOSS services, like Edda, Eureka and Ice.
Configuration As Code - Adoption of the Job DSL Plugin at Netflix (Justin Ryan)
The Jenkins Job DSL plugin allows programmers to express job configurations as code. Learn about the benefits, from the obvious (store your configurations in the SCM of your choice) to the not-so-obvious (focus on intent, instead of succumbing to the distraction of multiple, complex job configuration options). We will share our experience adopting the plugin over the past year to create and maintain more complex job pipelines at Netflix.
Presentation delivered by Darran Lofthouse, Principal Software Engineer, Red Hat & Kabir Khan, Principal Software Engineer, Red Hat, during London JBoss User Group event on the 21st of May 2014.
Google App Engine (GAE) is a popular PaaS offering whose scalable and reliable environment is hidden behind a custom API. This makes GAE apps hard to port over to other, non-GAE environments.
But what if one could implement a similar environment, so that you could simply move your GAE application’s .war file to it and it would just work?
After all, in the end it’s all about the API, plus scalable and reliable services.
The JBoss CapeDwarf project aims at making this a reality. This presentation will provide a glimpse into what it takes to implement something like GAE, ranging from runtime integration with JBoss Application Server, through the actual services implementation, to, last but not least, heavy automated testing.
Join us for this interactive event and get your hands dirty with some WildFly 9 hacking!
Our host Kabir Khan will explain how you can contribute to the WildFly project at many different levels, from properly reporting bugs in the forums and issue tracker, to actually being able to submit a pull request.
During this interactive event you will have a chance to play with WildFly 9 and try some of the following:
• Find a JIRA issue you want to work on.
• See how to check out the code and set up your IDE.
• Build WildFly.
• Code walkthrough: code organisation, jboss-modules, etc.
• Debug something from a stack trace in a JIRA issue to nail down the problem.
• Try the testsuite.
• And more!
GitHub Actions enables you to create custom software development lifecycle workflows directly in your GitHub repository. These workflows are made up of individual tasks, so-called actions, that can be run automatically on certain events.
My talk from Dockercon EU in Amsterdam, Dec 2014. Original abstract:
The ModCloth Platform team has been building a Docker-based continuous delivery pipeline. This presentation discusses that project and how we build containers at ModCloth. The topics include what goes into our containers; how to optimize builds to use the Docker build cache effectively; useful development workflows (including using fig); and the key decision to treat containers as processes instead of mini-vms. This presentation will also discuss (and demo!) the workflow we’ve adopted for building containers and how we’ve integrated container builds with our CI.
Volodymyr Dubenko, "Node.js for desktop development (based on Electron library)" (Fwdays)
How to re-use web app development skills for desktop development, how to avoid depending on UI frameworks (like XAML in WPF), how to easily create a cross-platform application, how to stay on top of current technologies without becoming outdated, and how to remove the border between desktop and web development.
When Infrastructure becomes so reliable it's boring, apps can shine. Here's how to learn more about configuration management and application deployment for OpenStack clouds like Cisco Metacloud. Delivered at the OpenStack Summit in Barcelona, October 2016.
Automated Infrastructure and Application Management (Clark Everetts)
Managing application infrastructure is an error prone, tedious, and often manual process leading to late hours spent troubleshooting self-inflicted oversights. Clark will introduce an open source Chef cookbook automating many steps, which utilizes a server side SDK to painlessly deploy PHP applications, and also show how the process can be managed leveraging Zend Server. Attendees will walk away with a complete toolset to implement quickly in their own projects.
This is a short introduction to Git, Travis, and Gradle, all used at the command line and together with GitHub. It contains some examples and a few simple exercises that help you get used to the commands. For further information, please see https://www.blogger.com/blogger.g?blogID=6687556278632539882#editor/target=post;postID=5021188629184529191;onPublishedMenu=allposts;onClosedMenu=allposts;postNum=0;src=postname
Presentation given to Docker Blacksburg Meetup on Feb 8, 2017
Provided a little background to why CI/CD is important. Then, how to actually build it out. Finished off with a lab using GitHub, Docker Hub, and Play with Docker.
Pluggable Infrastructure with CI/CD and Docker (Bob Killen)
The Docker cluster ecosystem is still young and highly modular. This presentation covers some of the challenges we faced in deciding what infrastructure to deploy, plus a few tips and tricks for making both applications and infrastructure easily adaptable.
Gradle build tool that rocks with DSL - JavaOne India, 4th May 2012 (Rajmahendra Hegde)
For a long time, we have used various build tools to package applications for new software releases, apply patches to existing applications, and so on. Dependency management, version control, scalability, flexibility, and single/multi-project support are some of the key areas that drive the selection of a build tool. This session focuses on Gradle as a successful build tool, looks into all the above areas, and uses Groovy as a DSL. We will also look at how easy Gradle is to use compared to other open source build tools.
Photos: https://plus.google.com/u/0/photos/105295086916869617504/albums/5739617166453582993
Gradle build tool that rocks with DSL By Rajmahendra Hegde at JavaOne Hyderabad, India on 4th May 2012
Here are slides from a basic training course for Gradle.
This training is aimed at helping Java developers get hands-on experience using Gradle as a primary build tool for Java source code, starting from simple compilation, continuing with different kinds of tests, and finishing with code quality analysis and artefact publishing.
Marc Stickdorn & Jakob Schneider – Mobile ethnography and ExperienceFellow, a... (Jakob Schneider)
This is the talk Marc and Jakob gave at the #sdgc15 – Service Design Global Conference in New York, 03 October 2015.
By popular demand: the "Base your customer journey on f***ing research" stickers are available at http://mrthinkr.com/
Database Migrations with Gradle and Liquibase (Dan Stine)
Database migration scripts are a notorious source of difficulty in the software delivery process. This session will discuss how we neutralized this all too common headache.
Now our deployment framework executes database migrations automatically with every application deploy, and the QA team performs self-service full stack deployments in test environments. The resulting additional bandwidth has been invested in more frequent software releases, and the opportunity to focus on higher-value tasks.
Analysis is so important to agile teams they do it every day. Every. Single. Day. In some respects agile teams perform analysis in a very different manner than traditional teams, and in some respects in a very similar manner. Agile analysis is collaborative and evolutionary in nature. Disciplined agile analysis takes it up a notch to address the complexity factors agile teams face at scale.
In this presentation we discuss how disciplined agile teams address analysis activities throughout the lifecycle. The transition to agile requires a mindset, skill set, and very often role change for people who are currently business analysts. On the majority of agile teams the role of business analyst has disappeared, but in some situations at scale the role is of vital importance – this isn’t your father’s software team any more. Lessons learned from several organizations making the transition to agile will be shared.
Key learning points:
• Discover how disciplined agile teams approach analysis, and modeling in general
• Learn agile analysis and modeling strategies
• Discover how business analysts can transition to an agile environment
Video can be found here: https://www.youtube.com/watch?v=-8tR-UbUpvI
As modern, agile architects and developers we need to master several different languages and technologies all at once to build state-of-the-art solutions and yet be 100% productive. We define our development environments using Gradle. We implement our software in Java, Kotlin or another JVM based language. We use Groovy or Scala to test our code at different layers. We construct the build pipelines for our software using a Groovy DSL or JSON. We use YAML and Python to describe the infrastructure and the deployment for our applications. We document our architectures using AsciiDoc and JRuby. Welcome to Babel!
Making the right choices in the multitude of available languages and technologies is not easy. Randomly combining every hip technology out there will surely lead into chaos. What we need is a customized, streamlined tool chain and technology stack that fits the project, your team and the customer’s ecosystem all at once. This code intense, polyglot session is an opinionated journey into the modern era of software industrialization.
Everything-as-code - A polyglot adventure (QAware GmbH)
Devoxx 2017, Poland: Talk by Mario-Leander Reimer (@LeanderReimer, Principal Software Architect at QAware).
Gradle is a general-purpose build automation tool. It combines the power and flexibility of Ant with the dependency management and conventions of Maven into a more effective way to build, and it's powered by a Groovy DSL. The presentation discusses what Gradle is and why to use it, with demos for Java, Groovy, web, multi-project, and Grails projects.
The Tale of a Docker-based Continuous Delivery Pipeline by Rafe Colton (ModCl... - Docker, Inc.
Jenkins is still one of the leaders among CI/CD products, so it is worth understanding what it can do and how to use it properly. Moreover, the project is still being updated, and it pays to keep track of the new capabilities it gives us.
This time we will talk about:
– Jenkins pipelines and shared libraries
– what kinds exist, and how and when to use them
– the differences between the scripted and declarative variants
– when it is necessary to use a shared library
– how to easily set up and start using Jenkins in Kubernetes with Jenkins Configuration as Code.
The talk will be relevant for: DevOps engineers, configuration managers, and developers who are tired of their jobs and have decided to do some Jenkins.)
About the speaker: Dmitry Kuleshov is a DevOps Engineer with 10 years of experience in information technology.
Faster Java EE builds with Gradle [CON4921] (Ryan Cuprak)
JavaOne 2016
It is time to move your Java EE builds over to Gradle! Gradle continues to gain momentum across the industry. In fact, Google is now pushing Gradle for Android development. Gradle draws on lessons learned from both Ant and Maven and is the next evolutionary step in Java build tools. This session covers the basics of switching existing Java EE projects (that use Maven) over to Gradle and the benefits you will reap, such as incremental compiling, custom distributions, and task parallelization. You’ll see demos of all the goodies you’ve come to expect, such as integration testing and leveraging of Docker. Switching is easier than you think, and no refactoring is required.
SymfonyCon Madrid 2014 - Rock Solid Deployment of Symfony Apps (Pablo Godel)
Web applications are becoming increasingly more complex, so deployment is not just transferring files with FTP anymore. We will go over the different challenges and how to deploy our PHP applications effectively, safely and consistently with the latest tools and techniques. We will also look at tools that complement deployment with management, configuration and monitoring.
Nagios Conference 2014 - Mike Merideth - The Art and Zen of Managing Nagios w... (Nagios)
Mike Merideth's presentation on The Art and Zen of Managing Nagios with Puppet.
The presentation was given during the Nagios World Conference North America held Oct 13th - Oct 16th, 2014 in Saint Paul, MN. For more information on the conference (including photos and videos), visit: http://go.nagios.com/conference
Participating
• Use individual plugins
• Get on the nebula-plugins Google Group
• Move your plugin to nebula-plugins
• Start a new plugin in nebula-plugins
I work on the Engineering Tools team at Netflix, where we build tools to support the developers. Today I’m here to talk about Netflix Build Language, or as we like to call it, nebula.
And I’m here to talk about how we try to exist in the open source space with plugins.
We had to transition out of an Ant/Ivy setup. We're primarily a JVM shop, and there are only a few players in the build space for that. A lot of conventions were in place, and changing them to fit Maven's model wasn't an option.
Look after the users, and where we can add value.
Just couldn’t get the experience as smooth as we wanted it in our enterprise environment. Still haven’t.
Patched Version of Gradle - Patch to cover a bug in Ivy. Patch to expose status/statusScheme.
Custom Distribution - So we can embed some init.d scripts, e.g. add our repository servers and add our plugin’s jar.
Custom Wrapper - To force some variables getting set, like memory defaults or GRADLE_USER_HOME. Plans to customize it to support re-downloading from a static URL.
The end user sees apply plugin: 'nebula'. A nebula extension block holds the other extensions via the @Delegate annotation.
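A minimal sketch of what such an umbrella extension could look like (class and property names here are illustrative, not the actual nebula source):

```groovy
// Illustrative sketch only: an umbrella extension exposing a child
// extension's properties directly on itself via Groovy's @Delegate.
class ContactsExtension {
    String owner
}

class NebulaExtension {
    // @Delegate re-exposes ContactsExtension's properties on NebulaExtension,
    // so users can write `nebula { owner = '...' }` without a nested block.
    @Delegate
    ContactsExtension contacts = new ContactsExtension()
}

// In the plugin's apply() method:
// project.extensions.create('nebula', NebulaExtension)
```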
NetflixOSS pre-dated our internal work, but it was going down the Maven route, and we had a goal of moving to Gradle. We also had concerns about how to integrate POMs with our internal Ivy files.
That was done in a way that makes the Gradle work really obvious, i.e. minimal work hidden in plugins and no custom DSL. It's hideous at this point and based on Gradle 1.6. We had to rush it to pre-empt projects going out with Maven. A github/cloudbees/bintray project plugin is forthcoming.
NetflixOSS fell way behind our internal development. The majority of our work was not Netflix-specific, but turned out to be just what a responsible project needs.
Also had growing pains with our plugin, so we wanted to decouple them.
Wanted to make new plugins easy, with CI and SCM. We’re build people, right?
Found that many plugins we wanted to use also lacked basic release engineering practices, which is clearly ironic given the space we're in.
Groups a plugin with SCM, CI, and deployment. Chosen out of familiarity; I wish I had looked at more options. Common patterns give the apply plugin nebula-like experience, enabled via nebula-plugin-plugin. There's a lot of hard coding.
Exists as a dedicated Organization, with project names following a pattern.
We learned from the @NetflixOSS work that GitHub doesn’t maintain itself.
We have to eliminate the manual work. And Jenkins is a great place to do that, except that setting up Jenkins is manual work. I don’t want to fight Jenkins, so we have a script to manage this.
The nebula prefix for highly opinionated plugins. While the gradle prefix is for plugins that have general applicability. Many of them are used in the plugin-plugin since they’re so helpful.
Not really a plugin, but integrated into the Gradle ecosystem. Meant to be used in tests.
* ProjectSpec has its limitations, since you can only really apply plugins, though you can run a hidden evaluate(). Really fast and recommended whenever possible.
PluginProjectSpec is a small addition that just applies your plugin by name, tries it twice and tries it in multi-module project.
IntegrationSpec runs a real Gradle build.
All the nebula-plugins use these Test classes, so there’s plenty of examples.
Helper methods to create a project, run a build, and evaluate the effects of an execution. Each spec gets its own directory to run in.
GradleLauncher is necessary for some in-memory checks or debugging.
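A sketch of what a test built on these classes might look like (Spock/Groovy; the helper names follow the nebula-test API as I understand it, and the build script content is a placeholder):

```groovy
import nebula.test.IntegrationSpec

class ExamplePluginIntegrationSpec extends IntegrationSpec {
    def 'build task succeeds with the plugin applied'() {
        given:
        // buildFile lives in this spec's own temporary project directory
        buildFile << """
            apply plugin: 'java'
        """.stripIndent()

        when:
        def result = runTasksSuccessfully('build')   // runs a real Gradle build

        then:
        result.wasExecuted(':build')
    }
}
```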
Not really a plugin; it has tasks and helper methods. Meant for runtime.
AlternativeArchiveTask provides an implementation of AbstractArchiveTask that is not also a CopyTask, since those are given special treatment.
beforeEvaluate actually runs before afterEvaluate blocks.
getTempDir gives a build directory for a task
addDefaultGroup lets you set a group, optionally. That's hard otherwise, because project.getGroup() will provide the parent project's name as the group.
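A hedged sketch of how such a helper could work (hypothetical implementation, not the actual nebula-core code):

```groovy
import org.gradle.api.Project

// Hypothetical helper: only set the group if the user hasn't chosen one.
// A plain null/empty check is not enough, because project.getGroup()
// falls back to the parent project's name when no group was set.
static void addDefaultGroup(Project project, String defaultGroup) {
    def inheritedDefault = project.parent?.name
    if (!project.group || project.group.toString() == inheritedDefault) {
        project.group = defaultGroup
    }
}
```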
Finally, a real plugin. Though its docs are the worst of the bunch, this is the most important one. It's actually a bunch of plugins, all meant to get a publication looking just right, down to the signing.
Artifact plugins attempt to make the jars (which we’d all expect).
Publishing plugins attempt to make resulting Ivy/POM file cleaner, primarily by using resolved variables and including excludes.
nebula-sign looks for magical properties and conditionally signs, unlike other approaches.
To support these, we needed to use Publish 2.0, which isn't fully baked. We were unable to use it as is, so we made our own CustomComponent. It allows any plugin to contribute artifacts, with more control over the resulting dependencies and confs (more important in Ivy).
Two plugins are available for ivy vs maven, nebula-ivy-publishing and nebula-maven-publishing.
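Applying one of them is a one-liner in the build script (the coordinates and plugin ids below are assumptions based on the nebula-plugins naming convention; check the project READMEs for the exact ids):

```groovy
buildscript {
    repositories { mavenCentral() }
    dependencies {
        // coordinates assumed from the nebula-plugins naming convention
        classpath 'com.netflix.nebula:nebula-publishing-plugin:latest.release'
    }
}

apply plugin: 'java'
// pick the variant matching your repository format:
apply plugin: 'nebula-maven-publishing'   // or: apply plugin: 'nebula-ivy-publishing'
```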
‘info-java' version of java being used
‘info-ci' tells about CI system and current build
‘info-scm' derives info about current SCM
‘info-jar' injects into the Jar’s MANIFEST.MF
‘info-props' creates file with values
‘info-jar-props' puts property file in jar for later retrieval
All go through a broker that you can listen to.
contacts-manifest puts people into the JAR manifest
contacts-pom puts people into developers section of POM
Example of how we believe some plugins can talk to each other without a DSL in the way.
Then we can send emails on release by the notify role.
We believe in a reproducible build for every developer, all the time. Dynamic versions make that hard, but we can't imagine manually editing versions. We've all had the experience of a new dependency getting published and breaking everyone. We also have cases where we want to tweak a specific module, for a patch, which can be done via the command line. Inspired by systems like Bundler.io.
Let some automated job update your dependencies when they’ve proven valid.
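A sketch of the resulting workflow with a dependency lock plugin (task and property names follow the nebula dependency-lock plugin as I understand it; treat the exact spellings as assumptions):

```groovy
// build.gradle keeps the dynamic range; a checked-in dependencies.lock
// file pins what every developer actually resolves.
dependencies {
    compile 'com.google.guava:guava:16.+'
}

// Regenerate the lock when you deliberately want newer versions
// (e.g. from an automated job that validates the update first):
//   ./gradlew generateLock saveLock
// Tweak a single module from the command line, e.g. for a patch:
//   ./gradlew -PdependencyLock.override=com.google.guava:guava:16.0.1 build
```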
Properly sets up classpath.
Does the things we’d expect in a responsible project. Even if they don’t like it.
E.g. it doesn't fail on Javadoc errors, and it maintains status even though the java plugin wants to walk all over it.
Adds OJO and Bintray, with publishing. Plugin Portal additions.
Needs to be extracted out for different orgs, different package names. Very meta, since it applies itself.
Our most popular plugin, even though we haven't advertised it at all. Takes the CopySpec idea and applies it to redline and jdeb.
buildRpm and buildDeb are implicitly created, but can still be configured if needed.
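A sketch of configuring the ospackage plugin, where shared CopySpec-style settings flow into both implicitly created tasks (the plugin id and property names are best-effort assumptions; check the plugin documentation):

```groovy
apply plugin: 'os-package'   // plugin id is an assumption

ospackage {
    packageName = 'my-service'
    version = '1.0.0'
    // CopySpec-style configuration, shared by buildRpm and buildDeb
    into '/opt/my-service'
    from('build/libs') {
        into 'lib'
    }
}

// The implicitly created tasks can still be configured individually:
buildRpm {
    arch = 'NOARCH'
}
```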
Proposed.
Anytime we find a problem internally, we make a plugin to test the problem and fix it. Roll it out and get happy users. Many times, we’re wrapping another plugin and configuring them with our defaults.
Lots of withType or .all {} calls
Ability to create tasks in reaction
Also try to abstract logic out of the Task, so that it can be called sequentially.
Sometimes we even got a third level
Configure tasks
Reacting to the user before the task graph forces afterEvaluates.
NamedDomainObjectSet configuration comes after events, so there's no ability to react, except by name.
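The reactive style from the notes above, sketched with the standard Gradle API (the Javadoc tweak mirrors the "don't fail on Javadoc error" default mentioned earlier):

```groovy
import org.gradle.api.plugins.JavaPlugin
import org.gradle.api.tasks.javadoc.Javadoc

// React to plugins and tasks whenever they appear, instead of forcing
// afterEvaluate: withType (like .all) fires for elements that already
// exist AND for any added later, so apply order stops mattering.
project.plugins.withType(JavaPlugin) {
    project.tasks.withType(Javadoc) { task ->
        task.failOnError = false   // e.g. don't fail the build on Javadoc errors
    }
}
```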
@Output properties have to be Files, not in-memory Strings. Evaluation of file names for outputs can be tricky.
Can't debug tests through the Tooling API.