SBT (Simple Build Tool) is a build tool for Scala and Java applications. It is used to compile code, create artifacts, manage dependencies, run tests, and more. SBT projects have a standard directory structure and are configured using a build.sbt file or Build.scala file where settings like the name, version, dependencies, and other configurations are defined. SBT supports features like incremental compilation, cross building for multiple Scala versions, and publishing artifacts to repositories.
An introduction to SBT and how it works internally.
Talk from September 2013 Slovak Scala User Group meet-up, http://www.meetup.com/slovak-scala/events/133327122/
The document provides instructions for getting started with SBT (Simple Build Tool) for Scala projects. It outlines 8 tasks to help learn the basics of using SBT to compile, test, and publish Scala code. The tasks include creating a simple "Hello World" project in SBT, adding dependencies, writing and running tests, using the SBT console, publishing code locally, and adding plugins. The document also provides a brief overview of key SBT concepts like settings, tasks, and keys.
SBT Concepts, part 2 discusses SBT project structure and commands. It explains how to create an SBT project with directories for sources and resources. The document shows how to define build settings in build.sbt or a custom Build.scala file. It demonstrates common SBT commands like compile, run, console, and how to view settings and tasks. Finally, it provides an overview of configurations, plugins, and delegates in SBT.
SBT provides several commands for debugging and exploring builds:
- settings -V and tasks -V to view settings and tasks matching a regex
- inspect and inspect tree to view types and relationships of settings/tasks
- show to display values of settings and tasks
An SBT build defines a transformation of an immutable map through Setting objects. These can define SettingKeys, TaskKeys, and InputKeys to modify the map. Settings are scoped to projects, configurations, and tasks. The sbt-assembly plugin introduces new tasks to package assemblies for publishing.
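The settings-map model described above can be made concrete with a small build.sbt sketch. The key names here are invented for illustration, not taken from any of the talks:

```scala
// build.sbt -- illustrative sketch; `greeting` and `greet` are made-up keys
val greeting = settingKey[String]("A value stored in the settings map")
val greet    = taskKey[Unit]("A task that reads a setting from the map")

// Each `:=` produces a Setting[_] that transforms the immutable settings map.
greeting := "hello from the build"
greet := println(greeting.value)

// Scoping: override the setting only for the Test configuration.
Test / greeting := "hello from the tests"
```

From the sbt shell, `inspect greeting` then shows the key's type and scope delegation, and `show greet` runs the task and displays its result, tying the commands above back to the map model.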
Ship your Scala code often and easy with Docker (Marcus Lönnberg)
This document discusses how to build and ship Scala code using Docker containers. It introduces Docker and shows how to define Dockerfiles and build Docker images from sbt projects using the sbt-docker and sbt-assembly plugins. It also demonstrates how to manage Docker containers from Scala code and how Docker can be used to automate build and deployment pipelines.
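A hedged sketch of what such a build might look like, combining sbt-assembly's fat jar with sbt-docker's Dockerfile DSL. The base image, paths, and plugin version are placeholders to verify against the plugins' own documentation:

```scala
// project/plugins.sbt would contain something like:
//   addSbtPlugin("se.marcuslonnberg" % "sbt-docker" % "<version>")

// build.sbt -- illustrative sketch, not taken verbatim from the talk
enablePlugins(DockerPlugin)

docker / dockerfile := {
  val artifact   = assembly.value   // fat jar produced by sbt-assembly
  val targetPath = "/app/app.jar"
  new Dockerfile {
    from("eclipse-temurin:17-jre")  // base image is an assumption
    add(artifact, targetPath)
    entryPoint("java", "-jar", targetPath)
  }
}
```

Running `sbt docker` would then assemble the jar and build the image in one step.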
Custom deployments with sbt-native-packager (Gary Coady)
sbt-native-packager offers a comprehensive approach to packaging artifacts with SBT. The user describes a generic layout, which can then be extended for different types of software and deployments. For example, it is flexible enough to describe both a Zip-based archive format and an RPM package with appropriate systemd configuration for a service.
This talk will cover the essentials needed to understand the design of sbt-native-packager, and how to extend its structure to create custom layouts and deployments.
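Under stated assumptions, a minimal build.sbt for the zip-plus-RPM scenario described above might look like this; the maintainer, vendor, and license values are placeholders:

```scala
// build.sbt -- sketch of sbt-native-packager with one generic layout
// feeding several output formats
enablePlugins(JavaServerAppPackaging, SystemdPlugin)

maintainer := "ops@example.com"
packageSummary := "Example service"
packageDescription := "A service packaged as both a zip and an RPM."

// RPM-specific metadata required by the Rpm format
rpmVendor  := "example"
rpmLicense := Some("Apache-2.0")
```

The same generic Universal layout then backs both targets: `sbt universal:packageBin` produces the zip archive, while `sbt rpm:packageBin` produces an RPM carrying a systemd unit for the service.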
This document summarizes a presentation about using Gradle to build Scala and Play applications. It discusses Scala and Play support in Gradle, continuous compilation mode, and demos building a basic Play app and using hot reload in Gradle. The presenter is a LinkedIn software engineer who provides an overview of Scala, Play, Gradle, and the benefits of using Gradle for Play application builds.
The document provides an introduction to Gradle, an open source build automation tool. It discusses that Gradle is a general purpose build system with a rich build description language based on Groovy. It supports "build-by-convention" and is flexible and extensible, with built-in plugins for Java, Groovy, Scala, web and OSGi. The presentation covers Gradle's basic features, principles, files and collections, dependencies, multi-project builds, plugins and reading materials.
Gradle build tool that rocks with DSL, JavaOne India, 4th May 2012 (Rajmahendra Hegde)
For a long time, we have used various build tools to package applications for new software releases, apply patches to existing applications, and so on. Dependency management, version control, scalability, flexibility, and single/multi-project support are some of the key areas that drive the selection of a build tool. This session focuses on Gradle as a successful build tool, looks into all of the above areas, and uses Groovy as a DSL. We will also look at how easy Gradle is to use compared to other open source build tools.
Photos: https://plus.google.com/u/0/photos/105295086916869617504/albums/5739617166453582993
Gradle build tool that rocks with DSL By Rajmahendra Hegde at JavaOne Hyderabad, India on 4th May 2012
Gradle is a flexible, open source build automation tool that uses Groovy as a domain-specific language to define build logic and configuration. It is based on the principle of convention over configuration and provides a rich set of tasks and a directed acyclic graph (DAG) model to declaratively define and manipulate the execution of tasks. Gradle aims to provide a powerful yet user-friendly alternative to tools like Ant and Maven for compiling code, generating packages and archives, managing dependencies, and more.
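The task DAG described above can be sketched in a few lines of the Groovy DSL; the task names are invented for illustration:

```groovy
// build.gradle -- minimal sketch of declarative task wiring
task compileDocs {
    doLast { println 'compiling docs' }
}

task packageDocs {
    dependsOn compileDocs          // an edge in the task DAG
    doLast { println 'zipping docs' }
}

// The graph can still be inspected once configuration is complete,
// before any task executes:
gradle.taskGraph.whenReady { graph ->
    if (graph.hasTask(packageDocs)) {
        println 'packageDocs is scheduled'
    }
}
```

Running `gradle packageDocs` executes `compileDocs` first because Gradle topologically sorts the declared dependencies before execution.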
This document discusses best practices for writing idiomatic Gradle plugins, including:
1. Making the plugin DSL readable, consistent, flexible and expressive.
2. Supporting the same Java versions as Gradle for compatibility.
3. Preferring methods over properties and using annotations properly.
4. Handling collections, maps, overriding dependencies, generated code, extensions and more idiomatically.
This document introduces GradleFx, a Flex build tool that uses Gradle. It discusses key features of GradleFx such as supporting SWC, SWF, and AIR compilation; tasks for cleaning, compiling, packaging, and testing; and conventions for project structure and dependencies. Advanced topics covered include compiler options, JVM arguments, dependency configurations, and additional steps for AIR projects and FlexUnit testing. An example Gradle build script is provided.
This document provides a three-sentence summary of the presentation "Idiomatic Gradle Plugin Writing" by Schalk W. Cronjé:
The presentation discusses best practices for writing Gradle plugins, including using consistent and readable extensions to the Gradle DSL, supporting offline mode, testing plugins against multiple Gradle versions, and extending existing task types when needed rather than forcing users to use standard configurations. It provides examples of idiomatic ways to handle collections, maps, dependencies, and project extensions within Gradle plugins. The presentation aims to promote quality attributes like readability, consistency, flexibility and expressiveness in plugin authoring.
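As a rough illustration of the extension-based DSL style the presentation recommends, here is a sketch; the `greeting` extension and task are invented for this example, not taken from the talk:

```groovy
// build.gradle -- sketch of a plugin exposing a readable DSL extension
class GreetingExtension {
    String message = 'hello'
}

class GreetingPlugin implements Plugin<Project> {
    void apply(Project project) {
        // The extension gives users a consistent configuration block
        def ext = project.extensions.create('greeting', GreetingExtension)
        project.tasks.register('greet') {
            doLast { println ext.message }
        }
    }
}

apply plugin: GreetingPlugin

greeting {
    message = 'hello from the DSL'   // readable, expressive configuration
}
```

In a real plugin the classes would live in `buildSrc` or a published artifact rather than the build script itself.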
This document discusses Jenkins Pipelines, which allow defining continuous integration and delivery (CI/CD) pipelines as code. Key points:
- Pipelines are defined using a Groovy domain-specific language (DSL) for stages, steps, and environment configuration.
- This provides configuration as code that is version controlled and reusable across projects.
- Jenkins plugins support running builds and tests in parallel across Docker containers.
- Notifications can be sent to services like Slack on failure.
- The Blue Ocean UI in Jenkins focuses on visualization of pipeline runs.
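A minimal declarative Jenkinsfile in the spirit of these points might look like the following sketch; the image name, stage layout, and Slack channel are assumptions, and `slackSend` comes from the Jenkins Slack plugin:

```groovy
// Jenkinsfile -- illustrative declarative pipeline
pipeline {
    // run the whole pipeline inside a Docker container
    agent { docker { image 'gradle:8-jdk17' } }
    stages {
        stage('Build') {
            steps { sh './gradlew assemble' }
        }
        stage('Test') {
            steps { sh './gradlew test' }
        }
    }
    post {
        failure {
            // notify on failure, as described above
            slackSend channel: '#builds', message: "Build failed: ${env.BUILD_URL}"
        }
    }
}
```

Because the file lives in the repository, the pipeline definition is version controlled alongside the code it builds.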
Gradle is a flexible general purpose build system with a build-by-convention framework a la Maven on top. It uses Apache Ivy under the hood for its dependency management. Its build scripts are written in Groovy.
Information technology has led us into an era where the production, sharing, and use of information are part of everyday life, often without our being aware of it: it is now almost impossible not to leave a digital trail of many of the actions we perform every day, for example through digital content such as photos, videos, and blog posts, and everything that revolves around social networks (Facebook and Twitter in particular). On top of this, with the "internet of things" we see a growing number of devices such as watches, bracelets, and thermostats that can connect to the network and therefore generate large data streams. This explosion of data justifies the term Big Data: data produced in large quantities, at remarkable speed, and in varied formats, which requires processing technologies and resources that go far beyond conventional data management and storage systems. It is immediately clear that, in these contexts, 1) storage models based on the relational model and 2) processing systems based on stored procedures and grid computation are not applicable. Regarding point 1, RDBMSs, widely used for a great variety of applications, run into problems when the amount of data grows beyond certain limits. Scalability and implementation cost are only part of the disadvantages: very often, when facing big data, variability, that is, the lack of a fixed structure, is also a significant problem. This has given a boost to the development of NoSQL databases. The NoSQL Databases website defines NoSQL databases as "Next Generation Databases mostly addressing some of the points: being non-relational, distributed, open source and horizontally scalable."
These databases are distributed, open source, horizontally scalable, schema-free (key-value, column-oriented, document-based, and graph-based), easily replicable, lack full ACID guarantees, and can handle large amounts of data. They are integrated with processing tools based on the MapReduce paradigm proposed by Google. MapReduce, together with the open source Hadoop framework, represents the new model for distributed processing of large amounts of data, supplanting techniques based on stored procedures and computational grids (point 2). The relational model taught in basic database design courses has many limitations compared to the demands posed by new applications, which use Big Data and NoSQL databases to store data and MapReduce to process large amounts of data.
Course Website http://pbdmng.datatoknowledge.it/
Contact me to download the slides
Gradle is an open source build automation tool that uses Groovy for its build configuration files rather than XML like Maven. It offers features like incremental compilation, parallel task execution, and a built-in dependency management system. Projects can be configured as multi-module builds with hierarchical or flat layouts. Gradle supports plugins for tasks like compilation, testing, packaging, and publishing. It integrates with IDEs like IntelliJ, Eclipse, and NetBeans and can be used to build Java EE applications and other projects.
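The multi-module layouts mentioned above are declared in settings.gradle; a minimal sketch, with project names invented for illustration:

```groovy
// settings.gradle -- hierarchical layout: subprojects live in
// core/ and web/ under the root project directory
rootProject.name = 'shop'
include 'core', 'web'

// A flat layout instead picks up sibling directories next to the root:
// includeFlat 'shop-core', 'shop-web'
```

Each included project then gets its own build.gradle, and the root build can configure them all via `subprojects { ... }`.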
Continuous Integration and DevOps with Open Build Service (OBS), by Ralf Dannert
speech/workshop held at the 6th Secure Linux Administration Conference (SLAC) in Berlin
content is mixed English/German
* Open Build Service (OBS)
* typical workflows with OBS
* command-line tool (osc)
* source services in OBS
* continuous integration with Jenkins (tracking upstream GitHub)
* image build (ISO) in OBS using KIWI
Gradle is a general-purpose build automation tool. It combines the power and flexibility of Ant with the dependency management and conventions of Maven into a more effective way to build. It's powered by a Groovy DSL. The presentation discusses what Gradle is and why to use it, with demos for Java, Groovy, web, multi-project, and Grails projects.
openQA hands on with openSUSE Leap 42.1 - openSUSE.Asia Summit ID 2016 (Ben Chou)
This document provides an overview of openQA and instructions for installing and configuring openQA on openSUSE Leap 42.1. It describes openQA's system architecture and workflow, and includes workshops to install openQA, configure the web UI and a worker, manage API keys, configure test settings, and run an openSUSE installation test.
The document discusses build tools like Ant, Maven, and Gradle. It provides an overview of each tool's history and capabilities. Gradle is presented as a build tool that aims to improve on Ant and Maven by allowing builds to be written in a Groovy-based domain-specific language for improved flexibility. The document also demonstrates several Gradle features like tasks, dependencies, plugins, and multi-project builds.
This document discusses best practices for writing idiomatic Gradle plugins. Some key points include:
- Use methods over properties for flexibility and readability in the DSL.
- Support the same JDK range as Gradle for compatibility.
- Use annotations like @Input, @Output to define task attributes.
- Provide three methods (get, set, add) for collection attributes rather than using properties.
- Extend existing tasks through extensions rather than reimplementing them. Cache extension attributes.
- Generate code by configuring a Copy task as a generator and adding it to the correct source set.
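The get/set/add idiom from the list above could be sketched like this; the task and property names are invented for illustration:

```groovy
// build.gradle -- sketch of a task exposing three access styles
// for a collection attribute instead of a bare property
class SourcesTask extends DefaultTask {
    private List<String> sources = []

    @Input
    List<String> getSources() { sources }                 // get

    void setSources(Iterable<String> s) { sources = s.toList() }  // set (replace)

    void sources(String... s) { sources.addAll(s) }       // add (append)

    @TaskAction
    void report() { println "sources: $sources" }
}

task listSources(type: SourcesTask) {
    sources 'a.scala', 'b.scala'   // additive call in the build DSL
}
```

The additive method keeps user configuration composable: several blocks can contribute entries without clobbering each other, while `setSources` remains available for a full override.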
This document outlines an agenda for a day with the Scala build tool sbt. It covers sbt basics, such as build definitions written in Scala code, and features such as interactivity and parallel task execution. The presenter also demonstrates sbt through live coding and discusses must-have sbt plugins. There will be an opportunity for questions, and a note that the presenter's company is hiring.
The document discusses how to install sbt, the Simple Build Tool, on Mac, Windows, and Linux. It provides instructions for installing sbt through third-party packages, universal packages, Typesafe Activator, or manually. On Linux, it recommends using package managers like apt-get on Debian/Ubuntu systems to install from a DEB package or downloading a universal package for other distributions. The document also covers creating a basic "Hello, World" sbt project.
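For the Debian/Ubuntu route summarized above, the steps roughly follow the pattern in the sbt documentation; treat the repository URL as an assumption to verify against the current download page, which also documents the archive signing key that must be added:

```shell
# Hedged install sketch for Debian/Ubuntu
echo "deb https://repo.scala-sbt.org/scalasbt/debian all main" |
  sudo tee /etc/apt/sources.list.d/sbt.list
sudo apt-get update && sudo apt-get install sbt

# A bare-bones "Hello, World" project: sbt also compiles .scala files
# placed directly in the project's base directory.
mkdir hello && cd hello
echo 'object Hello extends App { println("Hello, World") }' > Hello.scala
sbt run
```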
Data processing platforms architectures with Spark, Mesos, Akka, Cassandra an... (Anton Kirillov)
This talk is about architecture designs for data processing platforms based on SMACK stack which stands for Spark, Mesos, Akka, Cassandra and Kafka. The main topics of the talk are:
- SMACK stack overview
- storage layer layout
- fixing NoSQL limitations (joins and group by)
- cluster resource management and dynamic allocation
- reliable scheduling and execution at scale
- different options for getting the data into your system
- preparing for failures with proper backup and patching strategies
Banco de la República is the central bank of Colombia, created in 1923 by Law 25. Its functions are to regulate the country's currency, international exchange, and credit, to issue Colombian legal tender, and to administer the international reserves.
The document describes Brazil's National Mobilization System (SINAMOB). SINAMOB is composed of government bodies that plan and carry out national mobilization and demobilization. It is coordinated by the Ministry of Defence and includes ministries such as Justice, Foreign Affairs, and Planning. SINAMOB may request information from states and municipalities and is governed by principles such as permanence, flexibility, and coordination.
This document describes computer viruses, including their definition, origins, characteristics, types, and symptoms of infection. It explains that a virus is a malicious program that replicates itself in order to infect other files or systems. It also summarizes the main types of antivirus software and emphasizes that no security system is 100% effective against all viruses.
This treatment sheet outlines plans for the lighting, camerawork, and performance aspects of a music video for Florence + The Machine's song "Cosmic Love". It proposes using different lighting styles to differentiate the narrative and performance sections. Various camera shots like medium shots, close-ups, and handheld footage are discussed to showcase the location, emotions, and body language. The performance will show the protagonist Hayley's journey from nervousness to finding strength through dancing and posture that connects to the song's meaning.
Worked on strategy articulation with business owners for various projects.
Helped businesses expose their proposition through websites, social media, and direct marketing.
Working on numerous projects spanning education, entertainment, and security solutions.
Worked on E-commerce projects | Online Shopping | Payment Gateway Implementation
We have an in-house team specializing in digital marketing, marketing operations, strategy, and development operations.
We believe technology enables marketing to do a better job for today's businesses. We dedicate ourselves to making a positive change in your business. We believe innovation happens at the very ground level of "doing things". Therefore, our ability to solve our customers' business problems using technology keeps us thriving.
This document defines culture and civilization and discusses the classic Khmer civilization from the 9th to 15th centuries. It provides definitions of culture as the cumulative knowledge and traditions shared by a group that are learned and passed down over generations. Civilization is defined as the advancement of human society through the development of knowledge, rules, beliefs and justice. The classic Khmer civilization demonstrated elements of civilization through its unique culture, social structures, and integration of foreign influences while maintaining its traditions.
A Power-point presentation showing the progress we went through to construct and evaluate our preliminary task in order to prepare for our main task (which we will soon be starting).
The document provides definitions of terms commonly used in the coal mining industry. Among the terms explained are coal types such as anthracite, bituminous coal, and lignite; processes such as mining, crushing, and washing; and coal composition measures such as ash, carbon, and sulfur content. These definitions are useful for understanding mining operations.
The document describes the Army Mobilization System (SIMOBE), including its purpose of integrating processes and systems to manage land-forces military mobilization, its principles such as reliability and interoperability, and its objectives such as maintaining records of military resources and integrating with SISMOMIL.
The document explains the difference between Companies of National Defence Interest (EIDN) and Strategic Defence Companies (EED). EEDs are companies accredited by the Ministry of Defence that have access to special tax regimes and financing for defence projects, while EIDNs are listed in a Ministry of Defence database but are not accredited as EEDs.
The metallurgical process consists of several stages for extracting metal from its ore, including reducing the ore's grain size, concentration to separate the valuable minerals, and extraction of the metal from the concentrate using pyrometallurgy, hydrometallurgy, or electrometallurgy. The final metal is then refined and processed further if required.
1. The Vnet Club business presentation introduces a community of mobile phone users who share benefits with one another.
2. Vnet Club provides electronic phone-credit top-up, insurance, loyalty programs, and other services via SMS.
3. The Vnet Club business is based on a viral marketing system, with bonuses for recruiting new members.
This document provides an introduction to using Scala, including how to install Scala, use the Scala interactive console and compile Scala scripts and programs. It also discusses SBT (Scala Build Tool) for managing Scala projects and dependencies, and introduces some useful Scala frameworks like Xitrum for building web applications and Akka for building concurrent and distributed applications.
The document discusses Gradle, an open-source build automation tool. It provides an overview of Gradle's benefits such as scripting flexibility, incremental builds, and IDE project generation. It also covers key Gradle concepts like dependency management, testing, publishing artifacts, and custom tasks/plugins.
How to integrate_custom_openstack_services_with_devstackSławomir Kapłoński
DevStack is a tool used to quickly deploy OpenStack from source code for development and testing purposes. Plugins allow custom OpenStack services to be integrated with DevStack. A plugin contains scripts that are executed at different points during deployment to install and configure the custom service. Functions are provided to help with common tasks like installing packages or configuring services.
The document discusses implementing quality on Java projects. It provides five tips for ensuring quality: (1) maintaining API stability by avoiding deprecations and changes to public interfaces, (2) preventing "JAR hell" by avoiding duplicate dependencies and version conflicts, (3) enforcing high test coverage using tools like Jacoco, (4) improving stability of functional tests by filtering false positives in CI builds, and (5) dedicating time regularly for fixing bugs through a "Bug Fixing Day".
This document provides an overview of Apache Maven, including:
- Maven is a software project management and comprehension tool based on conventions like standardized project descriptors (POMs) and build lifecycles.
- Key concepts include dependencies, versions, profiles, repositories, and a plugin-based architecture that supports custom goals and extensions.
- Maven 3.x focused on improving backward compatibility, performance through parallel builds and caching, and extensibility through new APIs and classloader partitioning for plugins.
The document introduces the Play Framework version 2.1 and highlights its key features. It demonstrates building a sample application in Scala using Play's reactive, non-blocking architecture. Key features discussed include Play's built-in support for Scala, reactive programming, JSON APIs, routing, templates, and testing.
Riga Dev Day - Automated Android Continuous IntegrationNicolas Fränkel
This document discusses setting up continuous integration for Android projects. It describes issues with dependencies like Gradle and Robolectric not working properly due to proxy restrictions. It proposes solutions like using a local Maven repository, configuring Gradle properties, and creating a custom Robolectric test runner and dependency resolver. It also addresses problems updating the Android SDK due to needing proxy authentication and license agreements. An Expect script is created to automate providing the credentials and agreeing to licenses during the SDK update process.
The document provides an overview of using sbt (Scala Build Tool) including:
- Installing sbt and creating a basic project structure
- Common sbt commands like compile, run, and test
- Defining settings and tasks in build.sbt
- Referencing settings from tasks and getting task results
- Using scopes to define values for specific projects or configurations
- Plugins that add additional functionality to sbt
Maven is a project management and comprehension tool that handles builds, reporting, and handling of dependencies. It uses a Project Object Model (POM) file to manage projects. The POM file contains metadata like dependencies, plugins, and configurations. Maven standardizes builds through lifecycles and phases. It manages dependencies through a repository of artifacts. Liferay has integrated Maven support to allow plugin development with Maven through plugins, archetypes, and prebuilt EE artifacts.
Maven is a build tool that can be used to develop Liferay plugins. It provides dependency management, a common lifecycle, and conventions for building, testing, and deploying projects. The Liferay Maven support includes Liferay artifacts in Maven repositories, a Maven plugin for plugin development features, and archetypes for generating different plugin types. A demo showed creating a parent project and modules for a theme, service builder, and ext plugin using Maven. Future plans include improved IDE integration and more archetypes.
In-Cluster Continuous Testing Framework for Docker ContainersNeil Gehani
1. Tugbot is an open source in-cluster continuous testing framework for Docker containers that allows running tests inside containers in any environment.
2. It monitors Docker events like image updates and new containers to trigger automated tests. Test results are collected, stored in Elasticsearch, and visualized in Kibana.
3. Tugbot aims to simplify and standardize testing in continuous integration/delivery pipelines to improve software quality and catch issues early.
Maven is a build tool that provides a uniform build system with guidelines for best practices. It uses a Project Object Model (POM) XML file to define project information, relationships, and build settings. Maven projects have a default directory structure and life cycle made up of phases and goals. It supports multi-module projects to organize builds. Maven uses a local then remote repository to resolve dependencies. Artifactory is a repository that can host internal libraries. A complete build solution integrates Maven, source control, continuous integration, repositories, and code quality tools.
IBM Index 2018 Conference Workshop: Modernizing Traditional Java App's with D...Eric Smalling
Slides from my 2.5 hour hands-on workshop covering Docker basics, the Docker MTA program and how it applies to legacy Java applications and some tips on running those apps in containers in production.
Ansible is a Configuration Management System that is very simple to use, because of its straightforward and robust model for managing automation and it’s low barrier to entry for ease of use in both development and production.
During OpenStack development, Ansible can be used in conjunction with Vagrant and Devstack to manage complex, multi-node development environments with relative ease.
In this presentation, Juergen Brendel and David Lapsley review Ansible and provide some sample playbooks to get developers up and running quickly. They also describes how to use Ansible, Vagrant, Devstack, and OpenStack to accelerate OpenStack development cycles.
The document discusses using Maven to implement a continuous deployment pipeline. It addresses how to structure Maven projects to support various test stages like integration and acceptance testing in separate modules. It also provides solutions to issues Maven causes, such as rebuilding artifacts unnecessarily and an inability to simulate release versions, through the use of unique versioning and the Versions plugin. Continuous deployment is achieved by running tests and deploying builds from separate modules after each commit.
Oscon London 2016 - Docker from Development to ProductionPatrick Chanezon
Docker revolutionized how developers and operations teams build, ship, and run applications, enabling them to leverage the latest advancements in software development: the microservice architecture style, the immutable infrastructure deployment style, and the DevOps cultural model.
Existing software layers are not a great fit to leverage these trends. Infrastructure as a service is too low level; platform as a service is too high level; but containers as a service (CaaS) is just right. Container images are just the right level of abstraction for DevOps, allowing developers to specify all their dependencies at build time, building and testing an artifact that, when ready to ship, is the exact thing that will run in production. CaaS gives ops teams the tools to control how to run these workloads securely and efficiently, providing portability between different cloud providers and on-premises deployments.
Patrick Chanezon offers a detailed overview of the latest evolutions to the Docker ecosystem enabling CaaS: standards (OCI, CNCF), infrastructure (runC, containerd, Notary), platform (Docker, Swarm), and services (Docker Cloud, Docker Datacenter). Patrick ends with a demo showing how to do in-container development of a Spring Boot application on a Mac running a preconfigured IDE in a container, provision a highly available Swarm cluster using Docker Datacenter on a cloud provider, and leverage the latest Docker tools to build, ship, and run a polyglot application architected as a set of microservices—including how to set up load balancing.
- The document discusses the Simple Build Tool (sbt) and how it can be used to define Scala projects and their dependencies.
- It describes the structure of sbt's build.sbt file which defines project settings, dependencies, and repositories.
- Useful sbt plugins are mentioned like sbt-idea and sbteclipse to generate IDE project files, and sbt-assembly to build single JAR files. Common sbt tasks are also listed.
- Integration of sbt projects with IntelliJ IDEA and Eclipse IDEs is covered, with IDEA having better support and integration with sbt than Eclipse.
The document discusses property-based testing with ScalaCheck. It introduces property-based testing and provides examples of using ScalaCheck to define generators, test properties universally, and generate test cases. Key points include: limitations of traditional testing techniques, a first example of property-based testing with ScalaCheck to test the max function, defining inputs using generators, generating test cases, introducing algebraic data types and generating lists/collections of values.
This document discusses writing a domain specific language (DSL) for data transformations using applicative functors in Scala. It introduces the concepts of Picker, Reader, and Result to parse heterogeneous data formats into a common format. Reader is defined as an applicative functor to allow combining multiple readers. Later, Reader is enhanced to take type parameters for both input and output to avoid reparsing data and support XML parsing. Type lambdas are used to make Reader work as an applicative functor.
Introducing Monads and State Monad at PSUGDavid Galichet
This document discusses using the State monad to model a game where two robots move on a playground according to instructions, gathering coins. It first introduces functors, monads and for comprehensions in Scala. It then models the game state as a playground containing robots with positions and scores. The robots evolve by processing instructions in a state monad, allowing functional, pure modeling of state changes. Processing all instructions results in a state monad that can be run with the initial playground state.
The document describes using the State monad to model a simulation of two robots moving on a playground and gathering coins in a purely functional way. Key points:
- The simulation involves modeling state as robots move through positions on a playground and scores are updated
- Types like Position, Robot, Playground are defined to represent the game state
- The State monad is introduced to abstract over state manipulations in a functional way
- Methods like processInstruction are used to update the state after each robot move or turn
- compileInstructions uses the State monad and recursion to process the robot instructions lists while chaining the state updates
This document summarizes a presentation on demystifying Scala's type system. It covers key topics like types and variance, type bounds, abstract type members, ad-hoc polymorphism, existential types, and generalized type constraints. The schedule lists these topics to be covered in the presentation.
This document provides an overview of cryptography concepts including hashing, symmetric encryption, asymmetric encryption and digital signatures. It discusses common hashing and encryption algorithms and how they are used for authentication, integrity, privacy and non-repudiation. Salted hashes are described as a way to prevent password cracking via rainbow tables or dictionaries. Asymmetric encryption solves the key distribution problem of symmetric encryption by using public/private key pairs.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Things to Consider When Choosing a Website Developer for your Website | FODUUFODUU
Choosing the right website developer is crucial for your business. This article covers essential factors to consider, including experience, portfolio, technical skills, communication, pricing, reputation & reviews, cost and budget considerations and post-launch support. Make an informed decision to ensure your website meets your business goals.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
AI-Powered Food Delivery Transforming App Development in Saudi Arabia.pdfTechgropse Pvt.Ltd.
In this blog post, we'll delve into the intersection of AI and app development in Saudi Arabia, focusing on the food delivery sector. We'll explore how AI is revolutionizing the way Saudi consumers order food, how restaurants manage their operations, and how delivery partners navigate the bustling streets of cities like Riyadh, Jeddah, and Dammam. Through real-world case studies, we'll showcase how leading Saudi food delivery apps are leveraging AI to redefine convenience, personalization, and efficiency.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...
Simple Build Tool
1. (Simple?) Build Tool
David Galichet (@Xebia)
Jonathan Winandy
Wednesday, 23 November 2011
2. Schedule
• SBT basics
• installation and project setup,
• SBT usages,
• dependency management
• defining and using scopes, settings and tasks,
• cross building
• SBT demo
• using SBT
• using plugins
• writing a plugin
3. A build system for Scala & Java applications
• compile Scala and Java source code
• create artifacts
• manage dependencies (Ivy)
• run tests
• extensible architecture (with plugins)
• integrates with Eclipse & IntelliJ
• Hudson/Jenkins plugin available
• ...
4. More than a build system
• run your applications,
• launch the Scala REPL,
• triggered execution,
• ...
5. SBT History
• Created by Mark Harrah
• First popular branch until 0.7.7
• A new popular (and incompatible) branch from 0.9 → currently 0.11.1 (aka XSBT)
6. SBT Setup
• Download the launch-sbt.jar (rename it xsbt-launch.jar if version >= 0.9.x)
• Create a launch script (xsbt) available in your PATH:
java -Dfile.encoding=UTF8 -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=256m -jar `dirname $0`/xsbt-launch.jar "$@"
8. Using SBT
% xsbt
[info] Loading project definition from ...test/project
[info] Updating {file:/...test/project/}default-a285df...
[info] Done updating.
[info] Set current project to Test (in build file:...test/)
>
9. Creating a simple project
• create project directory,
• create the src/ directory hierarchy (optional),
• create a build.sbt in project root.
• Or use the interactive mode!
> set name := "test"
> session save
This will automatically create the build.sbt.
10. First build definition
name := "test"
version := "0.1-SNAPSHOT"
scalaVersion := "2.9.1"
libraryDependencies += "org.specs2" %% "specs2" % "1.6.1" % "test"
11. SBT basics
• name, version ... are Keys defining settings,
• settings are typed (String, Seq[String], Int, ModuleID ...)
• := is an assignment operator (overrides the previous value)
• += is a modification operator (appends a value to a sequence)
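Read with these rules, a build.sbt can be annotated line by line. The sketch below extends the earlier example; the ++= operator and the joda-time dependency are illustrative additions, not from the original deck:

```scala
// build.sbt — every expression yields a Setting that updates the build map.
// (In sbt 0.11.x, expressions in .sbt files must be separated by blank lines.)

name := "test"            // := replaces any previous value of the `name` key

version := "0.1-SNAPSHOT"

scalaVersion := "2.9.1"

// += appends one element to a Seq-valued setting (Seq[ModuleID] here)
libraryDependencies += "org.specs2" %% "specs2" % "1.6.1" % "test"

// ++= appends several elements at once (illustrative dependency)
libraryDependencies ++= Seq(
  "joda-time" % "joda-time" % "2.0"
)
```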
12. ModuleID
"org.specs2" %% "specs2" % "1.6.1" % "test"
============ ======== ======= ======
groupId artifact version configuration
String is implicitly converted to finally create a ModuleID.
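The mechanics can be sketched with a toy re-implementation of the DSL (not sbt's real ModuleID, which has many more fields; the hard-coded "_2.9.1" suffix stands in for the project's scalaVersion):

```scala
// Toy ModuleID: first % sets the revision, a second % sets the configuration
case class ModuleID(group: String, artifact: String,
                    revision: String = "", config: Option[String] = None) {
  def %(s: String): ModuleID =
    if (revision.isEmpty) copy(revision = s) else copy(config = Some(s))
}

// The left-hand String is implicitly wrapped to provide % and %%
implicit class GroupOps(val group: String) {
  def %(artifact: String): ModuleID = ModuleID(group, artifact)
  def %%(artifact: String): ModuleID =
    ModuleID(group, artifact + "_2.9.1") // %% appends the Scala version
}

val dep = "org.specs2" %% "specs2" % "1.6.1" % "test"
```

Since % methods are left-associative and share the same precedence, the expression desugars to ((("org.specs2" %% "specs2") % "1.6.1") % "test").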
13. Common commands
• reload
• clean
• compile
• test
• console
• console-project
• publish
• show
• set
• inspect
• project
• ...
14. Triggered execution
• use ~ to trigger task execution when code changes (compile or test, for example),
• SBT uses incremental compilation → it recompiles only what is needed.
15. Manual dependency management
All jar files in the lib directory are added to the classpath, so they are available when using compile, test, run, console ...
16. Automatic dependency management
Dependencies are added to settings:
libraryDependencies += groupID % artifactID % revision % configuration
where configuration (compile, test, run ...) is optional.
We can also encounter:
libraryDependencies += groupID %% artifactID % revision
%% means that SBT appends the project's scalaVersion to the artifact name (for example specs2_2.9.1)
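Concretely (a sketch: with scalaVersion := "2.9.1", the two declarations below resolve the same artifact):

```scala
libraryDependencies += "org.specs2" %% "specs2" % "1.6.1"
// is equivalent, for scalaVersion 2.9.1, to:
libraryDependencies += "org.specs2" % "specs2_2.9.1" % "1.6.1"
```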
17. Dependency management - Resolvers
Add a dependency resolver:
resolvers += "Repository name" at "http://the-repository/releases"
Add the local Maven repository to the resolvers:
resolvers += "Local Mvn Repository" at "file://" + Path.userHome.absolutePath + "/.m2/repository"
Dependency with an explicit resolver:
libraryDependencies += "slinky" % "slinky" % "2.1" from "http://slinky2.googlecode.com/svn/artifacts/2.1/slinky.jar"
⚠ Use with caution: the explicit resolver doesn't appear in the pom.xml when the artifact is published.
18. Dependency management - extra configuration
Extra configuration:
• intransitive() → disables transitivity for this dependency,
• classifier(..) → adds a classifier (e.g. "jdk5"),
• exclude(groupId, artifactName) → excludes the specified artifact (since 0.11.1),
• excludeAll(..) → excludes based on exclusion rules (since 0.11.1),
• ...
It's also possible to add Ivy configuration directly:
ivyXML := <ivysettings>...</ivysettings>
19. Publish artifacts
To publish an artifact locally (to the ~/.ivy2 local repository):
> publish-local
To define a Nexus repository (and publish with publish):
publishTo := Some("Scala Tools Nexus" at "http://mydomain.org/content/repositories/releases/")
or an arbitrary location:
publishTo := Some(Resolver.file("file", new File("path/to/my/maven-repo/releases")))
To define Nexus credentials:
credentials += Credentials(Path.userHome / ".ivy2" / ".credentials")
20. Cross building
To define all the Scala versions we want to build for:
crossScalaVersions := Seq("2.8.0", "2.8.1", "2.9.1")
Then prefix the action we want to run with +:
> + package
> + publish
If a dependency's version depends on the Scala version:
libraryDependencies <+= (scalaVersion) { sv =>
val vMap = Map("2.8.1" -> "0.5.2", "2.9.1" -> "0.6.3")
val v = vMap.getOrElse(sv, error("Unsupported ..."))
"org.scala" %% "mylib" % v
}
We can also use ++ <version> to temporarily switch version.
21. Full configuration
Defined in project/Build.scala:
import sbt._
import Keys._
object Test extends Build {
lazy val root = Project("root", file("."))
.settings(
name := "Test",
version := "0.1-SNAPSHOT",
...
)
}
22. Multi-project build
• We can define multiple projects in a full build definition:
object Test extends Build {
lazy val root = Project(id = "root",
base = file(".")) aggregate(foo, bar)
lazy val foo = Project(id = "test-foo",
base = file("foo")) dependsOn(bar)
lazy val bar = Project(id = "test-bar",
base = file("bar"))
}
• Settings in each project's .sbt files (e.g. foo/build.sbt) become part of that project's definition and are scoped to that project,
• project/*.scala files in sub-projects are ignored,
• projects lists the projects and project <name> switches to a project.
23. Scopes
We can define settings and use tasks along multiple axes:
• on full build,
• by project,
• by configuration,
• by task.
24. Define scope
Setting defined globally:
name := "test"
Setting restricted to a specific configuration:
name in Compile := "test compile"
Inspect:
> show name
[info] test
> show compile:name
[info] test compile
26. Project scope
• In a multi-project definition, some settings are defined in each project definition and assigned to that project's scope. For example:
> show version
[info] test-foo/*:version
[info] 0.7
[info] test-bar/*:version
[info] 0.9
[info] root/*:version
[info] 0.5
27. Build scope
To add a setting at the build scope in build.sbt:
myKey in ThisBuild := value
and in Build.scala (outside the project settings definition):
override def settings = super.settings ++ Seq(myKey := value)
then inspect:
{file:/home/hp/checkout/hello/}/*:myKey
28. Custom configuration
lazy val RunDebug = config("debug") extend(Runtime)
lazy val root = Project("root", file("."))
.configs( RunDebug )
.settings( inConfig(RunDebug)(Defaults.configTasks):_* )
.settings(
...
javaOptions in RunDebug ++= Seq("-Xdebug", "-Xrunjdwp:...")
...
)
then use this configuration: debug:run
29. SBT settings
• defined by typed keys (SettingKey[T] ...),
• keys are defined in sbt.Keys (or in a plugin, project, or build definition ...),
• keys have assignment methods that return a Setting[T],
• each Setting[T] defines a transformation of SBT's internal build definition map.
For example:
name := "test"
defines a transformation that returns the previous settings map with a new entry.
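The idea can be sketched with a toy model (plain Scala — not sbt's actual types, just the shape of the mechanism):

```scala
// A build is an immutable map of key -> value; a Setting is a pure
// transformation of that map.
type SettingsMap = Map[String, Any]
type Setting     = SettingsMap => SettingsMap

// := replaces an entry; += appends to a Seq entry
def assign(key: String, value: Any): Setting = m => m + (key -> value)
def append(key: String, value: Any): Setting = m => {
  val prev = m.getOrElse(key, Seq.empty[Any]).asInstanceOf[Seq[Any]]
  m + (key -> (prev :+ value))
}

// Loading the build = folding every Setting over an empty map
val build = Seq(
  assign("name", "test"),
  append("libraryDependencies", "org.specs2:specs2:1.6.1")
)
val settings = build.foldLeft(Map.empty[String, Any])((m, s) => s(m))
```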
30. Kinds of Settings
The three kinds of keys:
• SettingKey[T] → the setting is evaluated once,
• TaskKey[T] → the task is evaluated on each use and can have side effects,
• InputKey[T] → similar to a task, but its evaluation depends on command-line arguments.
When an assignment method (:=, ~=, <<= ...) is used on a:
• SettingKey[T], it returns a Setting[T],
• TaskKey[T], it returns a Setting[Task[T]],
• InputKey[T], it returns a Setting[InputTask[T]].
31. Modify settings
• := is used to replace the setting value:
name := "test"
• += is used to add one value to a setting of type Seq[T]:
libraryDependencies += "org.specs2" %% "specs2" % "1.6.1" % "test"
• ++= is used to add several values to a setting of type Seq[T]:
libraryDependencies ++= Seq(
  "se.scalablesolutions.akka" % "akka-actor" % "1.2",
  "se.scalablesolutions.akka" % "akka-remote" % "1.2")
32. Modify settings - transform a value
Sometimes we want to modify an existing setting's value.
There's an operator for that:
name ~= { name => name.toUpperCase }
or more succinctly :
name ~= { _.toUpperCase }
33. Modify settings - use a dependency
We want to compute a value based on another value:
organization <<= name(_.toUpperCase)
which is equivalent to:
organization <<= name.apply { n => n.toUpperCase }
where the SettingKey[T] <<= method is defined as:
<<=(app: Initialize[T]): Setting[T]
SettingKey[T] defines the apply method:
apply[U](f: T => U): Initialize[U]
apply transforms a SettingKey[T] into an Initialize[U].
34. Modify settings - use dependencies
In case we want to rely on several dependencies:
name <<= (name, version)( _ + "-" + _ )
which is equivalent to:
name <<= (name, version).apply { (n, v) =>
  n + "-" + v
}
Tuples (Initialize[T1], ..., Initialize[T9]) are implicitly converted to obtain the apply method.
35. Modify settings - use dependencies
Add a value with dependencies to a Seq[File]:
cleanFiles <+= (name) { n => file(".") / (n + ".log") }
Add some values with dependencies to a Seq[File]:
unmanagedJars in Compile <++= baseDirectory map {
base => ((base / "myLibs") ** "*.jar").classpath
}
36. Modify settings - tasks with dependencies
The SettingKey[S] apply method returns an Initialize[T], but for a TaskKey[T] the <<= method expects an Initialize[Task[T]].
The SettingKey[S] method map comes to the rescue:
map[T](f: S => T): Initialize[Task[T]]
We can set a TaskKey from a SettingKey:
taskKey <<= settingKey map identity
For multiple dependencies:
watchSources <+= (baseDirectory, name) map { (dir, n) =>
  dir / "conf" / (n + ".properties")
}
37. Settings and tasks definition
A setting key definition sample:
val scalaVersion = SettingKey[String]("scala-version",
  "The version of Scala used for building.")
A task key definition sample:
val clean = TaskKey[Unit]("clean",
  "Deletes files produced by the build, such as generated sources, compiled classes, and task caches.")
Here the clean task returns Unit when executed but can have side effects (produced artifacts are deleted).
Most SBT tasks are defined in Defaults.scala.
38. Define your own tasks
Define a task that prints and returns the current time:
import java.util.Date

val time = TaskKey[Date]("time", "returns the current time")
lazy val root = Project("test", file(".")).settings(
  time := {
    val now = new Date()
    println("%s".format(now))
    now
  })
Usage:
> time
Wed Nov 16 13:55:38 CET 2011
Unlike settings, tasks are evaluated each time they are called.
39. Input tasks
• Similar to a task, but can take user input as a parameter,
• SBT provides a powerful input parsing system (based on Scala parser combinators) with easy tab completion,
• The key is defined in a way similar to SettingKey or TaskKey:
val release = InputKey[Unit]("release", "release version")
• Defining it in the settings:
release <<= InputTask(releaseParser)(releaseDef)
• Similar to a Command (a kind of task that is not defined in settings and has no return value).
40. Input tasks - input parser
• Input parser sample:
val releaseParser: Initialize[State => Parser[String]] =
  (version) { (v: String) =>
    // ReleaseExtractor: presumably a version Regex such as
    // """(\d+)\.(\d+)\.(\d+)""".r, defined elsewhere
    val ReleaseExtractor(vMaj, vMin, vFix) = v
    val major = token("major" ^^^ "%s.%s.%s".format(vMaj.toInt + 1, vMin.toInt, vFix.toInt))
    val minor = token("minor" ^^^ "%s.%s.%s".format(vMaj.toInt, vMin.toInt + 1, vFix.toInt))
    val fix   = token("fix"   ^^^ "%s.%s.%s".format(vMaj.toInt, vMin.toInt, vFix.toInt + 1))
    (state: State) => Space ~> (major | minor | fix)
  }
41. Input tasks - task implementation
• Input task implementation:
val releaseDef = (nextVersion: TaskKey[String]) => {
  (version, nextVersion) map { case (currentV, nextV) =>
    println("next version : " + nextV)
    // keep any "fatal" lines from the git output
    val result = ("git tag " + currentV).lines_!.collect {
      case s: String if s.contains("fatal") => s
    }
    if (result.mkString.isEmpty) {
      println("Release tagged! Next one is " + nextV)
      // ...
    } else
      println(result.mkString)
  }
}
42. Settings precedence rules
From lowest to highest precedence:
• Build and Project settings in .scala files,
• User global settings in ~/.sbt/*.sbt,
• Settings injected by plugins,
• Settings from .sbt files in the project,
• Settings from the build definition project (i.e. project/plugins.sbt).
46. Extending SBT
• SBT can be extended using plugins,
• Plugins are new settings/tasks added to SBT,
• To add a plugin to the project or globally, add:
resolvers += Classpaths.typesafeResolver
addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse" % "1.4.0")
to your project/plugins.sbt or ~/.sbt/plugins/build.sbt
47. What is a plugin?
• A plugin is an SBT project added as a dependency of the build definition!
• Recursive nature of SBT:
/
  build.sbt
  project/          ← project definition
    Build.scala
    plugins.sbt
    project/        ← build definition's own build
      Build.scala
  ...
• We can load the build definition project with reload plugins and go back to the project with reload return.
48. Enhance the build definition project
• To use a specific library in your project/Build.scala, add the following to project/plugins.sbt (or project/project/Build.scala):
libraryDependencies += "net.databinder" %% "dispatch-http" % "0.8.5"
• To test build code snippets in a Scala REPL:
> console-project
This loads all build dependencies.
49. Some powerful APIs
• IO operations with the Path API,
• invoking external processes with the process API,
• input parsers and tab completion for tasks and commands,
• a launcher to run applications without a local Scala installation,
• all the power of the Scala API ...
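sbt's process API was later adopted into the standard library as scala.sys.process; a plain-stdlib sketch of the flavor of these APIs (jarsIn is a hypothetical helper approximating what the Path API's finders give you more concisely):

```scala
import scala.sys.process._
import java.io.File

// Invoke an external process and capture its stdout
val out: String = Seq("echo", "hello").!!.trim

// List jar files in a directory (java.io sketch of the Path API idea);
// returns an empty Seq when the directory does not exist
def jarsIn(dir: File): Seq[File] =
  Option(dir.listFiles).toSeq.flatten.filter(_.getName.endsWith(".jar"))
```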
50. Finally...
Is the Simple Build Tool simple?
• a limited set of key concepts to understand,
• a powerful API,
• easy access to the power of the Scala ecosystem,
• a growing number of plugins ...