This document contains an agenda for a two-day Accelrys software development event with over 50 registered attendees from partner companies such as BT, Discngine, and IBM. Day one includes sessions on new features and improvements in various Accelrys products, such as Direct and PPChem, along with sessions on deploying products like Discoverant and on using collections. Day two focuses on roadmaps for products such as LIMS and ELN, with additional sessions on maximizing performance, deployment strategies, and integration. The event aims to give attendees the information they need to get more out of Accelrys products.
Relay management – ELVIS event – Tuomas Vanninen, Fingrid Oyj
In 2008, Fingrid launched a project whose aim was to build a new information system that supports asset and operation management and is based on product-based solutions. The new system goes by the name of ELVIS, an acronym for ELectricity Verkko Information System ('verkko' is Finnish for 'network').
ELVIS substation maintenance – Hannu Hätönen, Fingrid Oyj
In 2008, Fingrid launched a project whose aim was to build a new information system that supports asset and operation management and is based on product-based solutions. The new system goes by the name of ELVIS, an acronym for ELectricity Verkko Information System ('verkko' is Finnish for 'network').
(EN) Success Story openpack - Carl Eichhorn: Automated energy and resource ma... – CIPA GmbH
Corrugated board manufacturer Carl Eichhorn automatically captures energy data through post-crosslinking. openpack, the digital platform for the corrugated board industry, makes this possible.
More information about the digital platform at www.openpack.net.
ECLIPSE for academia (CubeSat) and Career Opportunities – UKSEDS – Sapienza Consulting
On the 4th of March at the UKSEDS conference, Sapienza Consulting gave a presentation on space career opportunities and on managing space projects with the ECLIPSE Software Suite.
[Capella Days 2020] Innovating with MBSE – Medical Device Example – Obeo
by Tony Komar (Siemens)
Sustained innovation is the goal of many development organizations. On an innovation matrix, sustaining innovation is depicted as the result of a well-defined problem and a well-defined domain. An example will be presented of how an MBSE tool based on the open-source tool Capella can enhance both the problem definition and the domain definition of a ventilator. It will show how the MBSE tool deepened the understanding of the problem, and how that understanding can lead to an innovative solution.
What Does an Exec Need to Know About Architecture and Why – Jesse Anderson
My Strata Singapore 2017 talk on Big Data Architecture. I focus on the business reasons and actual problems that the latest Big Data technologies solve.
Spot Lets NetApp Get the Most Out of the Cloud – NetApp
Prior to NetApp acquiring Spot.io, two of its IT teams had adopted Spot in their operations: Product Engineering for Cloud Volumes ONTAP test automation and NetApp IT for corporate business applications. Check out the results in this infographic.
Overview of Blue Medora - New Relic Plugin for HP Rack Servers – Blue Medora
Overview of Blue Medora's New Relic Plugin for HP Rack Servers. The Blue Medora New Relic Plugin for HP Rack Servers provides support for New Relic Plugins as well as New Relic Insights.
[SECSI 2018] CONAMO - Continuous Athlete Monitoring through a Real-Time Senso... – Gilles Vandewiele
Sports and data go hand in hand. In recent years, data has become increasingly important for strategic decisions at professional events, amateur training, and more. This monitoring is especially relevant in cycling, a sport whose innovations have often stemmed from Flanders. Many cyclists ride with a plethora of sensors measuring aspects such as heart rate, power, location, and speed. The popularity of apps like Strava and Runkeeper shows the importance of sports data monitoring and users' willingness to pay for it.
The main limitation of existing cycling training apps is that almost all data reporting and analysis happens offline as a post-processing step. Real-time collection and analysis can therefore bring important innovations to the cycling world, especially for amateur cyclists. Another important cycling market is mass amateur cycling events, such as gran fondos for amateurs (e.g., the Tour of Flanders) and recreational cycling holidays. The former attract tens of thousands of participants, while the latter is one of the booming markets for the tourism sector. At these events, the lack of real-time data and of ways to interact with peers makes the cycling itself an individual experience. Real-time updates about friends' positions and performance can enable friendly competitions, presenting a business opportunity in a booming cycling event market where added experience is the key differentiator.
Therefore, the imec ICON-project CONtinuous Athlete MOnitoring (CONAMO) aims to provide a real-time data-driven cycling experience, both during training and at mass amateur events. Two major technological innovations are researched in order to achieve this goal:
* Mobile network layer based on long-range IEEE 802.15.4g wireless technology, with the bikes as nodes, to allow communication during mass amateur cycling events. This allows sending real-time sensor information from cyclist to cyclist up to the sink (e.g., a team car). Because cycling events pass through remote terrain, 4G coverage is often insufficient and is therefore augmented by the multi-hop bike network.
* Data analysis layer that interprets the data in real time to improve how amateur cyclists experience the race and their training. This analysis can occur both in preparation for the event and at the mass amateur cycling event itself.
This is complemented by a living lab approach in which (i) the actual prototype is further refined through an ideation process, and (ii) a post-product validation of the project's results validates the willingness to pay and provides insights into concrete business opportunities. The cycling training and mass amateur cycling event use cases are used throughout the project to validate and evaluate the CONAMO technical contributions, and are developed into two complementary proof-of-concept demonstrators.
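The multi-hop forwarding idea behind the bike network can be pictured as shortest-path routing toward the sink. Here is a minimal Python sketch of that routing step; the node names and topology are invented for illustration, and the real system of course runs over IEEE 802.15.4g radios rather than an in-memory graph:

```python
from collections import deque

def route_to_sink(links, source, sink):
    """Breadth-first search for the shortest multi-hop path from a
    bike (source) to the data sink (e.g. a team car)."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == sink:
            # Reconstruct the hop-by-hop path back to the source.
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        for neighbour in links.get(node, []):
            if neighbour not in parent:
                parent[neighbour] = node
                queue.append(neighbour)
    return None  # sink unreachable with the current radio links

# Hypothetical radio links between bikes (adjacency list).
links = {
    "bike-A": ["bike-B"],
    "bike-B": ["bike-A", "bike-C"],
    "bike-C": ["bike-B", "team-car"],
    "team-car": ["bike-C"],
}
print(route_to_sink(links, "bike-A", "team-car"))
# → ['bike-A', 'bike-B', 'bike-C', 'team-car']
```

In this sketch, bike-A has no direct link to the team car, so its sensor data would be relayed via bike-B and bike-C, which is exactly the situation the abstract describes when 4G coverage drops out.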
Benefits of Intel Technologies for Engineering Simulation – Ansys
This presentation gives the perspective of an ISV (independent software vendor) on using the latest Intel technologies for engineering simulation. It starts by introducing ANSYS and explaining why our customers have a continually increasing need for computing performance, so that they can run faster, bigger, and more numerous simulations. It then shows how we have worked closely with Intel to optimize our software at different scales of parallelism, from workstations and server-based clusters to supercomputers, in order to meet this growing compute demand. Key strategies for efficient parallel execution are discussed, and recent examples of the value of software optimization are shown.
Modern IoT operations can drive digital transformation by analyzing, in real time, the unprecedented amounts of data generated by devices and sensors.
Apache Spark is a widely used stream processing engine for real-time IoT applications. Spark Streaming offers a rich set of APIs for ingestion, cloud integration, multi-source joins, blending streams with static data, time-window aggregations, transformations, and data cleansing, along with strong support for machine learning and predictive analytics.
Join Anand Venugopal, AVP & Business Head, StreamAnalytix and Sameer Bhide, Senior Solutions Architect, StreamAnalytix to learn about the rapid development and operationalization of real-time IoT applications covering an end-to-end flow of ingest, insight, action, and feedback.
The webinar will cover the following:
- Generic IoT application blueprint
- Case studies on IoT applications built on Apache Spark – connected car and industrial IoT
- Demonstration of an easy, visual approach to building IoT Spark apps
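To make the time-window aggregation mentioned above concrete without requiring a Spark cluster, here is a plain-Python sketch of a tumbling-window average over sensor events; the timestamps and readings are invented. Spark Structured Streaming provides the same grouping via its `window()` function, incrementally and at scale:

```python
from collections import defaultdict

def tumbling_window_avg(events, window_seconds):
    """Group (timestamp, value) events into fixed-size time windows
    and compute the average value per window -- the kind of
    aggregation a streaming engine performs on a live stream."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for ts, value in events:
        # Align each event to the start of its window.
        window_start = ts - (ts % window_seconds)
        sums[window_start] += value
        counts[window_start] += 1
    return {w: sums[w] / counts[w] for w in sorted(sums)}

# Simulated engine-temperature readings: (unix_ts, celsius).
events = [(0, 80.0), (5, 82.0), (12, 90.0), (14, 94.0)]
print(tumbling_window_avg(events, 10))
# → {0: 81.0, 10: 92.0}
```

In a real Spark job the same logic would run continuously as new events arrive, emitting one aggregate row per window instead of a final dictionary.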
[SiriusCon 2020] Sirius to the Web with Obeo Cloud Platform – Obeo
Obeo Cloud Platform (OCP) is the Cloud-based solution developed by Obeo for deploying modeling tools on the web.
With OCP, modeling tools developed with Sirius can be installed on a Cloud server and are rendered in a web browser.
OCP Modeler is not just a revamping of Sirius for the web! Relying on a modern technical stack, it implements a new UX design to offer an experience adapted to web usage.
We will present the architecture of the solution and how it relates to Sirius. Then we will demo the capabilities of this modeling environment and give an overview of the roadmap.
Stéphane Bégaudeau, Obeo
Stéphane Bégaudeau graduated from the Nantes University of Sciences and Technology and is currently working as an Eclipse Modeling consultant at Obeo in France.
Mélanie Bats, Obeo
Mélanie Bats works as CTO at Obeo, where she focuses on managing the R&D team and creating products based on Obeo's own open source technologies. She has long worked on the development of modeling tools with Sirius, such as UML Designer, and is a committer on the EEF and Sirius projects. She is also involved in the Eclipse community as chair of the Eclipse Planning Council, and is a free software activist who has organized and participated in free software events in the Toulouse area.
The use of R in Predictive Maintenance: A use case with TRUMPF Laser GmbH – eoda GmbH
The buzz around Industry 4.0 continues – digitalizing business processes is one of the main aims of companies in the 21st century. One topic gains particular importance: predictive maintenance. Enterprises use this method to cut production and maintenance costs and to increase reliability. Being able to predict machine failures, performance drops, or quality deterioration is a huge benefit for companies: with this knowledge, maintenance and failure costs can be reduced and optimized.

With the help of R and its massive community, analysts can apply the best algorithms and methods for predictive maintenance. Once a good analytic model for predictive maintenance has been found, companies are challenged to implement it in their own environments and workflows. Especially for workflows spanning different departments, it is necessary to find an appropriate solution that supports interdisciplinary work as well.

My talk will show how this challenge was solved for TRUMPF Laser GmbH, a subsidiary of TRUMPF, a world-leading high-technology company offering production solutions in the machine tool, laser, and electronics sectors. I would like to share my experience with R and predictive maintenance in a real-world industry scenario and show the audience how to automate R code and visualize it in a front-end solution for all departments involved.
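One common starting point for predictive maintenance of the kind the abstract describes is flagging readings that drift away from a rolling baseline. The talk uses R; the sketch below shows the same idea in Python, with invented sensor values and an illustrative three-sigma threshold:

```python
def maintenance_alerts(readings, window=5, n_sigma=3.0):
    """Flag readings that deviate more than n_sigma standard
    deviations from the rolling mean of the previous `window`
    readings -- a simple early-warning signal for degradation."""
    alerts = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mean = sum(history) / window
        var = sum((x - mean) ** 2 for x in history) / window
        std = var ** 0.5
        if std > 0 and abs(readings[i] - mean) > n_sigma * std:
            alerts.append(i)
    return alerts

# Simulated machine output: stable, then a sudden drop.
power = [100.1, 99.9, 100.0, 100.2, 99.8, 100.0, 100.1, 92.0]
print(maintenance_alerts(power))
# → [7]  (the sudden drop at index 7 is flagged)
```

A production model would of course be richer (seasonality, multiple sensors, learned failure signatures), but the rolling-statistics baseline is a useful first step before investing in heavier methods.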
A talk about how we at Expedia are trying to get greater observability into our stack using Haystack, our open-sourced distributed tracing and analysis system.
Textual and model requirements: working together towards success
While Eclipse Capella is a model-based systems engineering environment, it also clearly enables the creation of model requirements that complement textual requirements. When dealing with both textual and model requirements, ensuring consistency and completeness is key to the final success of our systems.
Notebooks @ Netflix: From analytics to engineering with Jupyter notebooks – Michelle Ufford
Slides from JupyterCon 2018 in NYC on 8/23/2018.
Notebooks have moved beyond a niche solution at Netflix; they are now the critical path for how everyone runs jobs against the company’s data platform. From creating original content to delivering bufferless streaming, Netflix relies on notebooks to inform decisions and fuel experiments across the company. Netflix also uses notebooks to power its machine learning infrastructure and run over 150,000 jobs against its 100 PB cloud-based data warehouse every day. The goal is to deliver a compelling notebooks experience that simplifies end-to-end workflows for every type of user. To enable this, Netflix is investing deeply in notebook infrastructure and open source projects such as nteract.
In this talk, Michelle Ufford and Kyle Kelley share interesting ways Netflix uses data and some of the big bets the company is making on notebooks. Topics will include architecture, kernels, UIs, and Netflix’s open source collaborations with projects such as Jupyter, nteract, pandas, and Spark.
Overview of Blue Medora - New Relic Plugin for HP Blade Servers – Blue Medora
Overview of Blue Medora's New Relic Plugin for HP Blade Servers. The Blue Medora New Relic Plugin for HP Blade Servers provides support for New Relic Plugins as well as New Relic Insights.
This general session will introduce you to Accelrys Tech Summit (ATS4). The welcome overview will discuss what the audience can expect during the Tech Summit as well as how Accelrys plans on following up with some of the key discussion points and topics, and will inform the audience on how they can best leverage the tracks, sessions, and Accelrys employees to make the most of their time.
The Introduction to Scientific Innovation Lifecycle Management will provide the audience with context when attending the different technical sessions on how our solutions and technologies fit together to uniquely address the challenges faced by our customers worldwide and in different industries.
Empowering SmartCloud APM - Predictive Insights and Analysis: A Use Case Scen... – Prolifics
Abstract: You currently have SmartCloud APM. Now it is time to empower and enhance APM with the power of SmartCloud Analytics – Predictive Insights and Log Analysis. We will discuss an actual customer use case involving WebSphere Application Server and MQ, explain how to integrate these components, and show how to use the integrated solution to make problem determination and root cause analysis significantly easier and faster. We will also run a live demo to see the integrated solution in action.
Big data. Small data. All data. You have access to an ever-expanding volume of data inside the walls of your business and out across the web. The potential in data is endless – from predicting election results to preventing the spread of epidemics. But how can you use it to your advantage to help move your business forward?
Drive a Data Culture within your organisation
Keynote speakers include Ric Howe and Anthony Saxby
Revolutionary container-based hybrid cloud solution for ML – Platform
Ness' data science platform, NextGenML, puts the entire machine learning process (modelling, execution, and deployment) in the hands of data science teams.
The entire paradigm is built around collaboration on AI/ML, implemented with full respect for best practices and a commitment to innovation.
* Infrastructure: Kubernetes (on-prem) + Docker, Azure Kubernetes Service (AKS), Nexus, Azure Container Registry (ACR), GlusterFS
* Workflow: Argo -> Kubeflow
* DevOps: Helm, ksonnet, Kustomize, Azure DevOps
* Code management & CI/CD: Git, TeamCity, SonarQube, Jenkins
* Security: MS Active Directory, Azure VPN, Dex (K8s) integrated with GitLab
* Machine learning: TensorFlow (model training, TensorBoard, serving), Keras, Seldon
* Storage (Azure): Storage Gen1 & Gen2, Data Lake, File Storage
* ETL (Azure): Databricks, Spark on K8s, Data Factory (ADF), HDInsight (Kafka and Spark), Service Bus (ASB), Lambda functions & VMs, Cache for Redis
* Monitoring and logging: Grafana, Prometheus, Graylog
This whitepaper describes how Qubole on AWS provides end-to-end data lake services such as AWS infrastructure management, data management, continuous data engineering, analytics, and ML with zero administration.
https://www.qubole.com/resources/white-papers/qubole-on-aws
Blending Supersonic, Subatomic Java with deep learning to perform object detection. Sounds interesting? Because it is! Watch this session to learn how to create a microservice combining TensorFlow and Quarkus into one executable using GraalVM native image, JNI, and Protobuf. With this, we detect objects in photos by returning labels, bounding boxes, and confidence scores. Additionally, we will touch on Open Data Hub, an AI/ML solution for OpenShift.
DATA @ NFLX (Tableau Conference 2014 Presentation) – Blake Irvine
I presented this at a 2014 Tableau Conference session with Albert Wong.
Netflix relies on data to make decisions ranging from buying and recommending content, to improving the streaming experience on devices.
This presentation shares our Big Data analytics architecture and the tools used to make data accessible throughout our business, focusing on how Tableau fits into our organization and why it aligns well with our culture.
Achieve Sub-Second Analytics on Apache Kafka with Confluent and Imply – Confluent
Presenters: Rachel Pedreschi, Senior Director, Solutions Engineering, Imply.io + Josh Treichel, Partner Solutions Architect, Confluent
Analytic pipelines running purely on batch processing systems can suffer from hours of data lag, resulting in accuracy issues with analysis and overall decision-making. Join us for a demo to learn how easy it is to integrate your Apache Kafka® streams into Apache Druid (incubating) to gain real-time insights from the data.
In this online talk, you’ll hear about ingesting your Kafka streams into Imply’s scalable analytic engine and gaining real-time insights via a modern user interface.
Register now to learn about:
-The benefits of combining a real-time streaming platform with a comprehensive analytics stack
-Building an analytics pipeline by integrating Confluent Platform and Imply
-How KSQL, streaming SQL for Kafka, can easily transform and filter streams of data in real time
-Querying and visualizing streaming data in Imply
-Practical ways to implement Confluent Platform and Imply to address common use cases such as analyzing network flows, collecting and monitoring IoT data and visualizing clickstream data
Confluent Platform, developed by the creators of Kafka, enables the ingest and processing of massive amounts of real-time event data. Imply, the complete analytics stack built on Druid, can ingest, store, query and visualize streaming data from Confluent Platform, enabling end-to-end real-time analytics. Together, Confluent and Imply can provide low latency data delivery, data transform, and data querying capabilities to power a range of use cases.
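As a flavour of the KSQL step mentioned above, a filter-and-transform over a stream might look like the following fragment. The stream and column names are hypothetical, chosen to match the network-flow use case; the result of the statement is itself a continuously updated stream:

```sql
-- Keep only high-volume network flows and project the fields
-- needed downstream for analysis in Druid/Imply.
CREATE STREAM big_flows AS
  SELECT src_ip, dst_ip, bytes
  FROM network_flows
  WHERE bytes > 1000000;
```

Because the derived stream is written back to Kafka, it can be ingested by Imply (or any other consumer) without touching the original topic.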
DevOps is powering the computing environments of tomorrow. When properly configured, the Splunk platform allows us to gain real-time visibility into the velocity, quality, and business impact of DevOps-driven application delivery across all roles, departments, processes, and systems. Splunk can be used by DevOps practitioners to provide continuous integration/deployment and the real-time feedback that helps the organization with its operational intelligence. Join us for an exciting talk about Splunk’s current approach to DevOps, and for examples of how Splunk is being used by customers today to transform DevOps initiatives.
We are an IT consulting company providing services to clients across geographies in Data Engineering, AI/ML, Cloud & DevOps, Platform Engineering, and Process Hyperautomation.
In this presentation, we show how Data Reply helped an Austrian fintech customer to overcome previous performance limitations in their data analytics landscape, leverage real-time pipelines, break down monoliths, and foster a self-service data culture to enable new event-driven and business-critical use cases.
The next decade looks to be one of the most disruptive in the short history of IT. New computing and architecture paradigms, an exploding number of connected devices, and new organization models have all directly impacted what systems integration will look like in the years ahead. What challenges has the cloud introduced? How does a DevOps commitment impact my integration approach? What role does integration play in the "internet of things"? In this session, we'll talk about some of the mega-trends in the industry and how that may impact your approach to integration today and tomorrow.
How do industry trends like cloud computing, DevOps, internet-of-things, mobility, and wearables impact application integration? This presentation looks at some considerations for integration architects.
ScienceCloud: Collaborative Workflows in Biologics R&D (BIOVIA)
The life sciences industry has undergone dramatic changes and effective global collaboration has become a key success factor in this new age. BIOVIA is providing a hosted and comprehensive solution stack for externalized, collaborative research for pharma/biotech and CROs to address these new challenges. Recently we added the support for biologics data management and IP capture. In this talk we will present collaborative and comprehensive capabilities in antibody characterization and development: capabilities to analyze, annotate and predict developability as part of a framework that facilitates secure data sharing and collaboration.
(ATS6-PLAT09) Deploying Applications on Load Balanced AEP Servers for High Availability (BIOVIA)
Enterprise web applications and web services require a highly available and scalable environment. During this session, we’ll demonstrate how Accelrys Enterprise Platform 9 is deployed and configured within a load-balanced environment.
(ATS6-PLAT07) Managing AEP in an Enterprise Environment (BIOVIA)
Accelrys Enterprise Platform use within an Enterprise environment spans from Power users of Pipeline Pilot to web applications and High Performance Computing. Managing the balance between productivity and enterprise policies can be tricky. This session will focus on exposing the tools and processes needed by administrators to enable users to be productive, yet allowing IT to remain in control.
(ATS6-PLAT06) Maximizing AEP Performance (BIOVIA)
This session covers how to get maximum performance from your AEP server: ways to improve the execution time of short-running jobs, and how to properly configure the server for the expected number of users and the average size and duration of individual jobs. Examples will include job pooling, database connection sharing, and parallel subprotocol tuning. We will also discuss when to use cluster, grid, or load-balanced configurations, along with memory and CPU sizing guidelines.
(ATS6-PLAT05) Security Enhancements in AEP 9 (BIOVIA)
In the latest version of the Accelrys Enterprise Platform we have streamlined how permissions are managed and added the capability for packages to define groups and permission sets. In addition, enhancements have been made to File Based Authentication, we have added support for enterprise authentication solutions like Kerberos and SAML and improved the usability of the Administration Portal. This session describes the new features and how to manage them through the Administration Portal.
(ATS6-PLAT04) Query Service (BIOVIA)
The Query Service is the new platform solution for querying a variety of data sources. Its goal is to let administrators configure a metadata description of a data source that end users can then use without detailed knowledge of the underlying source. This session explains how to configure Query Service data sources and use them with the RESTful API or component collection.
(ATS6-PLAT02) Accelrys Catalog and Protocol Validation (BIOVIA)
Accelrys Catalog is a powerful new technology for creating an index of the protocols and components within your organization. You will learn about strategies for indexing and how search capabilities can be deployed to professional client and Web Port end users. You will also learn how to use this technology to find out about system usage to aid with system upgrades, server consolidations, and general system maintenance. The protocol validation capability in the admin portal allows administrators to create standard reports on server usage characteristics. You will learn how to report on violations of IT policies (e.g. around security), bad protocol authoring practices, or missing or incomplete protocol documentation. Developers will also learn how to extend and customize the rules used to create these reports.
(ATS6-PLAT01) Chemistry Harmonization: Bringing together the Direct 9 and Pip... (BIOVIA)
Pipeline Pilot Chemistry 9.0 is inheriting many new chemical representations from the Accelrys Direct data model. These include the support of the Self Contained Sequence Representation (SCSR) biologics, enhanced Markush structure representations, Markush homology groups, and Non Specific Structures (NONS). Also significantly enhanced is the support for Sgroups, in particular for polymers, mixtures, and formulations. Further, Pipeline Pilot depiction has been upgraded to support these enhancements and the stereochemical perception and ring perception capabilities were improved based on Direct.
The major benefit of these changes is that Direct and Pipeline Pilot now use the same data model. Searches carried out in Direct or in Pipeline Pilot will return identical results and both products will deliver identical structural perceptions. This session will give guidance on how these changes will impact your calculators and models and how you can plan for a smooth upgrade.
(ATS6-GS04) Performance Analysis of Accelrys Enterprise Platform 9.0 on IBM’s Scalable IT solution (BIOVIA)
IBM recently completed a benchmarking study of several key modules of the Accelrys Enterprise Platform (AEP) 9.0, using IBM’s iDataPlex and General Parallel File System (GPFS). The results show that the performance of IO intensive workloads, such as Next Generation Sequencing (NGS), can be improved significantly by using GPFS. NGS workloads can also benefit from better load balancing implemented on AEP 9.0. Best practices for scalable IT solutions will also be discussed.
(ATS6-GS02) Integrating Contur and HEOS (BIOVIA)
In this session, we will look at how the Accelrys Enterprise Platform facilitates the integration between Contur and HEOS to implement compound registration from Contur into HEOS without leaving the Contur experiment.
(ATS6-DEV09) Deep Dive into REST and SOAP Integration for Protocol Authors (BIOVIA)
Pipeline Pilot has always had a strong focus on integration with external resources. In AEP 9.0 we continue this tradition with a major overhaul of our SOAP Connector component as well as improved support for RESTful services. In this talk we will look at how to build protocols that access SOAP services, especially secured services, and review the approach to accessing RESTful services.
(ATS6-DEV08) Integrating Contur ELN with other systems using a RESTful API (BIOVIA)
In order to enable easy integration between Contur ELN and other informatics systems a RESTful API has been developed. Data may be extracted from ELN experiments using GET calls, but external applications can also insert results directly into the ELN record. In particular the API can be used with Accelrys Enterprise Platform to create complex flows for resolving scientific problems. Such protocols may be launched from within the ELN client.
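The GET-to-extract and POST-to-insert pattern described above can be sketched as follows. This is a hypothetical illustration: the base URL, endpoint paths, and field names are invented for the example; consult the Contur ELN API documentation for the real resource names.

```python
# Hypothetical sketch of calling an ELN-style RESTful API.
# The base URL, paths, and JSON field names below are invented
# for illustration, not Contur's actual API.
import json

BASE = "https://eln.example.com/api"

def experiment_url(experiment_id):
    """URL for a GET call that extracts an experiment's data."""
    return f"{BASE}/experiments/{experiment_id}"

def result_payload(experiment_id, name, value):
    """JSON body for a POST that inserts a result into the ELN record."""
    return json.dumps({"experimentId": experiment_id,
                       "resultName": name,
                       "resultValue": value})

# Usage (network calls omitted here), e.g. with the requests library:
#   requests.get(experiment_url(42), auth=credentials)
#   requests.post(f"{BASE}/results", data=result_payload(42, "yield", 87.5),
#                 headers={"Content-Type": "application/json"})
```

A Pipeline Pilot protocol can wrap calls like these to build the more complex flows mentioned above, and such a protocol can then be launched from within the ELN client.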
(ATS6-DEV07) Building widgets for ELN home page (BIOVIA)
From a developer’s perspective, the Accelrys ELN Home Page is a container of widgets. It manages the layout of widgets, and handles the persistence of their settings. Several widgets are provided with the application: one for creating new experiments, another for tracking work in progress, and an inbox widget for messages sent through the notebook. This out-of-the-box set can be supplemented by building custom widgets.
This session will show several custom widgets examples to demonstrate the basic concepts of widget development and the API they implement. We will also discuss best practices, and how to make your widget a good citizen of the Home Page.
(ATS6-DEV06) Using Packages for Protocol, Component, and Application Delivery (BIOVIA)
Delivering protocols, components, and applications to users and other developers on an AEP server can be very challenging. Accelrys delivers the majority of its AEP services in the form of packages. This talk will discuss the methods that anyone can use to deliver bundled applications as packages and the benefits of doing so, covering how to create packages, modify existing packages, deploy packages to servers, and use tools for ensuring package quality.
(ATS6-DEV05) Building Interactive Web Applications with the Reporting Collection (BIOVIA)
The reporting component collection in AEP provides powerful tools for building user interfaces for web applications, while leveraging the breadth of functionality of AEP for data querying and manipulation. This session will explore some of the tools available for creating web applications using the reporting collection.
(ATS6-DEV04) Building Web MashUp applications that include Accelrys Applicati... (BIOVIA)
One of the biggest challenges in most corporate environments is providing a way for users to access all the data they need, usually stored in multiple disparate locations, from one interface that they are comfortable with. As web applications have become more popular, RESTful APIs have emerged as the preferred web service format in recent years. Many Accelrys applications now provide RESTful APIs that allow developers to build mashup applications. This session will explore some of these APIs and how to use them to build a simple application.
(ATS6-DEV03) Building an Enterprise Web Solution with AEP (BIOVIA)
In this session, we'll take a deep dive into building an Enterprise Solution with AEP. We'll be using Pipeline Pilot to develop the protocols that will provide our server-side implementations and ExtJS to build the user interface. We'll look at the techniques involved in using protocols to implement actions and explore the capabilities of ExtJS to produce powerful enterprise applications.
(ATS6-DEV02) Web Application Strategies (BIOVIA)
AEP provides a range of options for developing web applications. Understanding these options, their strengths, and the decision-making process involved in choosing the right strategy is key to leveraging the power of the platform and achieving your goals on schedule. From simple reporting protocols developed exclusively with Pipeline Pilot through to Rich Internet Applications built with JavaScript and ExtJS, we'll look at the work involved, the required skill sets, and the time considerations to ensure you make the right choice for your project.
(ATS6-DEV01) What’s new for Protocol and Component Developers in AEP 9.0 (BIOVIA)
This session will focus on new features now available to protocol and component developers in the new version of AEP, including improvements to hierarchical data records and XML reading and writing, new parameter subprotocol promotion behavior, new component icons, parameter metadata, easier-to-access Job Pooling settings, Pilotscript updates, Hashmap improvements, Unicode reading improvements, and other improvements to protocol development.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
- UI automation introduction
- UI automation sample
- Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
PHP Frameworks: I want to break free (IPC Berlin 2024) (Ralf Eggert)
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
"Impact of front-end architecture on development cost", Viktor Turskyi (Fwdays)
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”: how does this fancy AI technology get managed from an infrastructure and operations point of view? Is it possible to apply our beloved cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we will discuss which cloud or on-premise strategy we may need in order to apply it to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already have working in practice.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
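Under the hood, a JMeter backend listener ships each sample to InfluxDB as a line-protocol point, which Grafana then queries. The sketch below renders a simplified version of that format; the measurement, tag, and field names are illustrative rather than JMeter's exact schema, and real line protocol adds type suffixes and escaping.

```python
# Simplified sketch of InfluxDB line protocol, the wire format a
# JMeter backend listener uses to push per-sample metrics to InfluxDB.
# Measurement/tag/field names here are illustrative, not JMeter's
# exact schema; real line protocol also adds type suffixes and escaping.

def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Format one sample as an InfluxDB line-protocol point:
    measurement,tag=val,... field=val,... timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

point = to_line_protocol(
    "jmeter",
    {"application": "webshop", "transaction": "login"},
    {"count": 1, "avg": 231.0},
    1717000000000000000,
)
```

Grafana dashboards then query these points (e.g. average response time per transaction over time) to visualize the load test in real time.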
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Key Trends Shaping the Future of Infrastructure (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The keynote covers the key trends across hardware, cloud, and open source, exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
2. The information on the roadmap and future software development efforts is intended to outline general product direction and should not be relied on in making a purchasing decision.
5. Wednesday, 22 May: Day 1
9:00-10:00 Registration & Welcome Coffee
10:00-10:30 (ATS6-GS01) Welcome & Introduction to SILM
10:30-11:15 (ATS6-GS02) Integrating Contur and HEOS
11:15-12:00 (ATS6-GS03) How BT is Simplifying the Mining of Biological ‘Big Data’ – a CTO’s Perspective
12:00-13:15 Lunch
13:15-14:00 (ATS6-PLAT01) Changes and Improvements to Direct 9 and PPChem | (ATS6-APP01) Deploying Discoverant Across a Global Network | (ATS6-DEV01) What’s New for Developers in AEP 9.0
14:00-14:45 (ATS6-PLAT02) Accelrys Catalog and Protocol Validation | (ATS6-APP02) Helping Users Make the Most of their Data with Discoverant | (ATS6-DEV02) Web Application Strategies
14:45-15:30 (ATS6-PLAT03) Life Sciences Partner Ecosystem Overview | (ATS6-APP03) What’s Behind Discngine Collections: the Tibco Spotfire Connector & Graph Toolbox | (ATS6-DEV03) Building an Enterprise Web Solution with AEP
15:30-16:15 Break
16:15-17:00 (ATS6-PLAT04) Query Service | (ATS6-APP04) Flexible Data Capture for Improved Laboratory Ergonomics | (ATS6-DEV04) Building Web Mashup Applications Using Accelrys REST Services
17:00-17:45 (ATS6-PLAT05) Security Enhancements in AEP 9 | (ATS6-APP05) Deploying Contur ELN to Large Organizations | (ATS6-DEV05) Building Interactive Web Applications with the Reporting Collection
17:45-19:00 Free Time
19:00-21:00 Group Dinner
6. Thursday, 23 May: Day 2
9:00-9:45 (ATS6-Roadmap01) Platform Roadmap
9:45-10:30 (ATS6-GS04) Partner Session - IBM: Performance Analysis of Accelrys Enterprise Platform 9.0 on IBM’s Scalable IT solution
10:30-11:00 Break
11:00-11:45 (ATS6-Roadmap02) LIMS Roadmap | (ATS6-Roadmap03) ELN Roadmap | (ATS6-Roadmap04) Discoverant Roadmap
11:45-12:30 (ATS6-PLAT06) Maximizing AEP Performance | (ATS6-APP06) Accelrys LIMS and Accelrys ELN Integration | (ATS6-DEV06) Using Packages for Enterprise Application Delivery
12:30-13:45 Lunch
13:45-14:30 (ATS6-PLAT07) Managing AEP in an Enterprise Environment | (ATS6-APP07) Configuration of Accelrys ELN to Clone to the Latest Template Version | (ATS6-DEV07) Building Widgets for ELN Home Page
14:30-15:15 (ATS6-PLAT08) AEP in a Validated Environment | (ATS6-APP08) ADQM Solution Deployment | (ATS6-DEV08) Integrating Contur ELN with Other Systems Using a RESTful API
15:15-15:30 Break
15:30-16:15 (ATS6-PLAT09) Deploying Applications on Load Balanced AEP Servers for High Availability | (ATS6-APP09) ELN Configuration Management with ADM | (ATS6-DEV09) Deep Dive into REST and SOAP Integration for Protocol Authors
16:15 Closed
7. Why
• Focus is on HOW
• Your Questions – Our Answers
• Our Questions – Your Answers
• Please Ask Your Questions – We Are Going to Ask Ours
• Speakers’ Objective: when the audience walks out of your session, they should have information that will allow them to improve their ability to perform their job
8. SILM: Scientific Innovation Lifecycle Management
[Diagram: the SILM flow (Ideation, Development, Validation, Manufacturing) running from scientific innovation to commercialization alongside PLM, ERP, and MES systems; it applies both to life science products (small molecule drug, biologic, pesticide, etc.) and to non-life science products (chemical, material, catalyst, polymer, formulation, etc.)]
9. An Open Scientifically Enabled Infrastructure
[Diagram: the Accelrys Enterprise Platform, with its Scientific and Generic Services and Data Management Services, underpinning Enterprise Lab Management (ELN, LEA, SmartLab, Notebook, Vault, LIMS), Workflow and Automation (Pipeline Pilot), Modeling and Simulation (Discovery Studio, Materials Studio), and Data Management and Informatics (Isentris, Registration, Oracle), together with 3rd party, partner and ISV apps from other vendors; surrounding capabilities include chemistry, biology, materials, analytical instrumentation, scientific modeling, laboratory ops, reporting, statistics & analysis, and text & image analytics]
10. Accelrys Enterprise Platform
• IT Services: User Administration, System Administration, Security, Load Balancing, HPC
• Scientific and Generic Services: Scientific Collections, Generic Collections
• Data Management Services: Data Integration, Data Access, Cartridge
• Development Services: Web Services, 3rd Party Integration, SDKs / APIs
• Application Services: Catalog, Protocol Execution, Web Port
11. The Need for a Platform to Underpin SILM
• Customers: access to broad scientific informatics solutions across different domains; integration into existing IT environments; lower cost of IT administration
• Accelrys: rapidly bring new applications to market; create applications that are consistent and integrated; leverage common IT-related services
• Partners: easily integrate partner technologies for broader solutions; leverage domain expertise of external partners
12. [Diagram: the Accelrys Enterprise Platform underpinning Workflow and Automation (Pipeline Pilot), Modeling and Simulation (Discovery Studio, Materials Studio), Data Management and Informatics (Isentris, Registration), and Enterprise Lab Management (ELN, LEA, SmartLab)]
13. Do you have AEP?
• Accelrys Enterprise Platform – or – Accelrys Enterprise Platform for Application
• Some conditions and criteria apply; check with your Accelrys account manager