The document provides an overview of the OpenNTF Domino API (ODA). It discusses what the ODA is, how to set it up and implement it, considerations for using it, and provides examples. Specifically:
- The ODA is an open source project that fills gaps and enhances Java capabilities for Domino. It consists of packages that can be installed as an OSGi plugin on Domino servers.
- Setup involves importing the ODA into an update site NSF, adding it to the server startup, and preparing Domino Designer.
- Other considerations include logging, transactions, views, documents, dates, and graphs.
- Examples shown include session handling, view handling, document creation and typed item access, transactions, Xots tasklets, and graph databases.
1. Boost your JAVA Code with the OpenNTF API
Oliver Busse
We4IT GmbH, Germany
March 17, 2016
2. Oliver Busse
• "Bleeding Yellow" since R4.5
• Software Architect at We4IT
• Member of the development team of
Aveedo® Application Framework
• IBM Champion for ICS in 2015 + 2016
• OpenNTF Member Director
• XPages Advocate
• IBM Bluemix curious
@zeromancer1972
www.oliverbusse.com
3. Agenda
• What is the OpenNTF Domino API?
• Setup and Implementation
• Other Considerations
• Tons of examples
5. What is the OpenNTF Domino API?
• It's an open source project on OpenNTF
• It was started in April 2013
• It's maintained by generous developers you may know
• It fills the gaps and gives the power you always wanted in Java for Domino
• It's often referred to as "ODA"
6. What is the OpenNTF Domino API? (cont'd)
• The ODA consists of several packages
• core
• formula
• rest
• xsp
• …
• It's an OSGi plugin
• It's designed for running on the Domino server (9.0.x+)
• It's designed for XPages (Java, SSJS) and Plugins
• It can't be used in Java Agents
7. Key developers of the ODA
• Nathan T. Freeman
• Paul S. Withers
• Jesse Gallagher
• Roland Praml
• Martin Jinoch
• René Winkelmeyer
• Tim Tripcony (never forgotten)
9. Resources
• Grab it from OpenNTF (recommended)
• http://www.openntf.org/main.nsf/project.xsp?r=project/OpenNTF%20Domino%20API
• Grab it from the Git-Repo
• https://github.com/OpenNTF/org.openntf.domino
• Grab it from the OpenNTF Stash
• https://stash.openntf.org/projects/ODA
10. System Logging
• Since the ODA is an OSGi plugin, you can install it via the update site mechanism
• It runs as an extension to the XSP runtime on the HTTP server JVM
• It comes with its own logger
11. Setup: prepare the server
• Add the signer of the NSF to "Sign or run…" in the server document's security section
12. Setup: prepare the update site
• Create an update site NSF
• Name it whatever you want
• Make sure the ACL allows the server to READ documents
13. Setup: import ODA into update site
• Find the site.xml file and import it as a local update site into your NSF
• After the import, go to "Actions, Sign all Content"
14. Setup: add the ODA to server startup
• Add a new line to your server's notes.ini file
• edit the file manually or
• use a configuration settings document (preferred)
• OSGI_HTTP_DYNAMIC_BUNDLES=updatesite.nsf
15. Setup: add the ODA to server startup
• This is what you should see when the server starts:
HTTP JVM: CLFAD0330I: NSF Based plugins are being installed
in the OSGi runtime. For more information please consult the
log
• Check the plugins with
– tell http osgi ss openntf
16. Setup: prepare Domino Designer
• Open DDE's preferences
• Go to the "Domino Designer" section
• Activate "Enable Eclipse plug-in install"
• Open the update site NSF you just created
• Go to "Actions, Show URLs"
• Copy one of the two URLs to the clipboard
• Go to "File, Application, Install"
• Choose "Search for new features to install"
• On the next screen, "Add (a) Remote Location"
• Enter a name for it and paste the URL from the clipboard
• On the next screen, check the ODA entry and click Next/Yes if you are asked to
18. Other Considerations
• ODA utilizes the OpenLog project
• XspOpenLogUtil.logEvent(…)
• XspOpenLogUtil.logError(…)
• Get familiar with the OpenLog project from OpenNTF
• Create a new OpenLog.nsf file in your server's root (if you haven't already)
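A minimal error-handling sketch using the two OpenLog calls named above (the import path and the surrounding class are assumptions and may differ by ODA version):

    import org.openntf.domino.utils.XspOpenLogUtil; // import path is an assumption

    public class OrderProcessor {
        public void process() {
            try {
                // ... business logic that may throw
            } catch (Exception e) {
                // writes the error with its stack trace to OpenLog.nsf
                XspOpenLogUtil.logError(e);
            }
        }
    }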
28. Save lines of code by using new methods
• New creation methods
• Database.createDocument(String, Object, …)
• Database.createDocument(HashMap fields)
• Alternatives to replaceItemValue
• Document.put(String field, Object o)
• Document.putAll(HashMap fields)
• Alternatives to getItemValueXXX
• Document.get(Object o) // document acts like a Map<?>
• Document.getItemValue(String field, Class type)
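A short sketch of these methods in combination (the form and field names are illustrative; db stands for an org.openntf.domino.Database obtained elsewhere):

    import java.util.HashMap;
    import org.openntf.domino.Database;
    import org.openntf.domino.Document;

    public class PersonWriter {
        public static void createPerson(final Database db) {
            HashMap<String, Object> fields = new HashMap<String, Object>();
            fields.put("Form", "Person");
            fields.put("FirstName", "Ada");

            Document doc = db.createDocument(fields); // create and populate in one call
            doc.put("LastName", "Lovelace");          // instead of replaceItemValue
            doc.save();
        }
    }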
29. getItemValue: what you are used to
• getItemValue returns a Vector
• Vectors are not type-safe
• the editor/compiler complains about the lack of type safety
• they can contain "anything"
• you have to check what is inside
• if the item does not exist you run into trouble…
30. getItemValue: what you can do now
• cast to a type of your choice
• ArrayList<?> values = doc.getItemValue("foo", ArrayList.class);
• forget type safety
• define your own!
• a non-existing item is returned as null, not as an empty Vector
• can be handled
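A sketch of the typed retrieval pattern (the field names are illustrative):

    import java.util.ArrayList;
    import org.openntf.domino.Document;

    public class PersonReader {
        public static void readTyped(final Document doc) {
            // cast to a type of your choice; a missing item yields null, not an empty Vector
            ArrayList<?> tags = doc.getItemValue("Tags", ArrayList.class);
            if (tags == null) {
                // the item "Tags" does not exist on this document; handle it here
            }
            // single values work the same way
            String firstName = doc.getItemValue("FirstName", String.class);
        }
    }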
34. Transactions
• ODA adds transactional capabilities to your Notes data
• You can modify documents without saving them
individually (e.g. in a loop)
• You can also rollback every modification if you need to
(e.g. when you run into an error)
35. Transactions (cont'd)
• Create a new DatabaseTransaction object from the database
• DatabaseTransaction txn = db.startTransaction();
• Perform your modifications
• Decide whether to commit or rollback
• txn.commit();
• txn.rollback();
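A minimal commit/rollback sketch of the pattern above (the item name is illustrative; the package of DatabaseTransaction is an assumption):

    import org.openntf.domino.Database;
    import org.openntf.domino.Document;
    import org.openntf.domino.transactions.DatabaseTransaction; // package path is an assumption

    public class BatchUpdater {
        public static void markAllProcessed(final Database db) {
            DatabaseTransaction txn = db.startTransaction();
            try {
                for (Document doc : db.getAllDocuments()) {
                    doc.put("Processed", "Yes"); // queued, not saved individually
                }
                txn.commit();   // persists all modifications at once
            } catch (Exception e) {
                txn.rollback(); // discards every modification in the transaction
            }
        }
    }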
37. Xots
• Xots = XPages OSGi Tasklet Service
• It's the extended version of DOTS (Domino OSGi Tasklet Service)
• Use cases
• Can be coded inside the NSF, no plugin project needed
• Multi-threaded tasks like Runnable, but you can return values
• Bulk execution of time consuming code
• very new feature (alpha)
38. Xots (cont'd)
• Advantages
• More granular time and event triggering than in Agents
• Can run with server-side permissions
• Runs in a shared container (JVM), unlike an Agent, which runs in a dedicated JVM
• you can exchange data between tasklets
• It's coded in a plain Java class and not in an Agent design element
• You can use SCM systems
39. Xots (cont'd)
• Core elements of a tasklet
• Interface Callable<?>
• Interface Future<?>
• get() method to get the return value(s)
• only if you are interested in a return value
• Class Xots from the ODA
• submit() method to create a tasklet
• schedule() methods to create a periodic tasklet
• use the PeriodicScheduler!
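A bare-bones tasklet sketch built on the Callable/Future pattern listed above (how the Xots class exposes submit() differs between ODA versions; getService() is an assumption):

    import java.util.concurrent.Callable;
    import java.util.concurrent.Future;
    import org.openntf.domino.xots.Xots;

    public class CountTasklet implements Callable<Integer> {
        @Override
        public Integer call() {
            // time-consuming work runs here, off the XPages request thread
            return 42; // stand-in for a real result
        }

        public static Integer runOnce() throws Exception {
            // submit() hands the tasklet to the shared Xots executor
            Future<Integer> result = Xots.getService().submit(new CountTasklet());
            return result.get(); // blocks only when you ask for the value
        }
    }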
41. Graph DB
A graph database, also called a graph-oriented database, is a type of NoSQL database
that uses graph theory to store, map and query relationships.
A graph database is essentially a collection of nodes and edges. Each node represents
an entity (such as a person or business) and each edge represents a connection or
relationship between two nodes.
http://whatis.techtarget.com/definition/graph-database
42. Graphs – terminology
• Vertices (Nodes)
• Properties (Key-Value pairs)
• Edges
• Connections, Relations between Vertices
• ElementStores
• for us: NSF databases
• MetaverseIDs
• Replica + UNID (hashed)
• internal use only (don't care about them)
43. Graph DB – in Domino?
• Vertices and Edges are stored as Documents
• The data container is an NSF
• The ElementStore defines the file path to the NSF
• An ElementStore can hold different types of Vertices
• Usually you create one ElementStore for each Vertex type
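A hypothetical set-up sketch for the structure described above (the class names and setters are assumptions based on the ODA's org.openntf.domino.graph2 package and may not match the exact API):

    import org.openntf.domino.graph2.impl.DConfiguration; // assumption
    import org.openntf.domino.graph2.impl.DElementStore;  // assumption
    import org.openntf.domino.graph2.impl.DGraph;         // assumption

    public class GraphSetup {
        public static DGraph buildGraph() {
            // one element store per vertex type, each backed by its own NSF
            DElementStore people = new DElementStore();
            people.setStoreKey("graph/people.nsf"); // file path of the backing NSF

            DConfiguration config = new DConfiguration();
            config.addElementStore(people);
            config.setDefaultElementStore(people.getStoreKey());

            return new DGraph(config); // vertices and edges persist as documents
        }
    }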