Cloud Deployment of Data Harmony
Jeffrey Gordon, Lead Developer, Access Innovations, Inc.
Jeffrey will describe the cloud deployment of the Data Harmony software.
Marjorie M. K. Hlava, President, Chair of the Board, and Chief Scientist, Access Innovations, Inc.
During this annual highlight of the DHUG meetings, Margie will discuss the exciting new changes and additions to the Data Harmony software. She will be joined by some members of our software development team to talk about specific initiatives we have worked on over the past year.
eRoom is designed specifically for small to medium-sized businesses in need of a collaboration tool. In this webinar, we covered the following topics:
• SharePoint Online and Office overview
• Comparison between eRoom and SharePoint Online / Office 365
• Migration methodologies from eRoom to Office 365
• Migration options from eRoom to Office 365
RTTS - the Software Quality Experts
---------------------------------------------------------------------------------
RTTS (www.rttsweb.com) is the premier pure-play QA & Testing organization, specializing in test automation. Founded in 1996, with locations in New York (HQ), Atlanta, Philadelphia, and Phoenix, RTTS has successfully completed engagements at over 600 companies. RTTS also has alliances with the top vendors in QA and testing, including IBM, Microsoft, HP, and Oracle.
---------------------------------------------------------------------------------
Services include:
- Managed Testing Services - in the Cloud or on your premises
- Test Management
- Automated Functional Testing
- Performance/Load Testing
- Data Warehouse/ETL Testing
- Big Data Testing
- Mobile Testing
- Application Security Testing
------------------------------------------------------------------------------------
- Training courses (in the Cloud, at our NY offices, or at your site)
+ Selenium training
+ IBM Rational RPT, RFT, RQM training
+ Appium training
+ Microsoft Visual Studio Load, Coded UI, Test Manager training
+ HP Quality Center/ALM UFT, Loadrunner training
+ Big Data Testing training
+ Data Warehouse Testing training
--------------------------------------------------------------------------------------
RTTS is also the developer of QuerySurge (www.QuerySurge.com), the premier data testing tool for:
- Data warehouse testing
- ETL testing
- Big Data testing (Hadoop, MongoDB, etc.)
- Data Interface testing (SAP, PeopleSoft, etc.)
- Data Migration testing
- Database Upgrade testing
Modernize and Transform your IT with IBM Storage and Catalogic Copy Data Mana...
Catalogic Software
Catalogic Copy Data Management (CDM) modernizes and transforms your IBM Storage infrastructure. Catalogic provides the only integrated CDM solution that lets you:
• Catalog and track copies and VMs across the enterprise
• Automate protection SLAs, copy creation and system provisioning
• Transform IT operations with Hybrid Cloud, DevOps and user self-service
Through operational modernization, Catalogic lets you derive additional value from your IBM storage investment, deliver a more agile IT infrastructure, and improve business productivity. Catalogic transforms your IBM Storwize, SAN Volume Controller (SVC), VersaStack and FlashSystem V9000 environments with a non-disruptive, software-only solution. Join this webinar to learn how Catalogic can help you modernize and transform your IT.
Collab 365 - Real world scenarios to migrate to SharePoint 2016 or Office 365
Patrick Guimonet
These are the slides from our session at Collab 365 on SharePoint 2016 and Office 365 migration, the same session we presented at SPS Barcelona 2015 with Gokan Ozcifci.
Modernize and Transform your IT with NetApp Storage and Catalogic Copy Data M...
Catalogic Software
Catalogic Copy Data Management (CDM) modernizes and transforms your NetApp Storage infrastructure. Catalogic provides the only integrated CDM solution that lets you:
• Catalog and track copies and VMs across the enterprise
• Automate protection SLAs, copy creation and system provisioning
• Transform IT operations with Hybrid Cloud, DevOps and user self-service
Through operational modernization, Catalogic lets you derive additional value from your NetApp storage investment, deliver a more agile IT infrastructure, and improve business productivity. Catalogic transforms your NetApp FAS environments with a non-disruptive, software-only solution that also supports NetApp Private Storage and Cloud ONTAP. Join this webinar to learn how Catalogic can help you modernize and transform your IT.
Become a data-driven organization through unified metadata using ODPi Egeria
Data Con LA
Data Con LA 2020
Description
Learn how ODPi Egeria uses its distributed virtual graph to connect metadata about an enterprise's data and IT services from many different tools and then apply governance across this landscape. In this talk we will describe the principles behind the distributed virtual graph and how different technologies can connect in. We will also cover how the JanusGraph technology can be used to fill in the gaps between the tools to ensure the metadata is linked together.
Speaker
Mandy Chessell, IBM, ODPi TSC Chairperson and ODPi Egeria project chairperson. IBM Distinguished Engineer
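The "distributed virtual graph" idea described above, metadata nodes from different tools linked by relationships that fill in the gaps between them, can be illustrated with a minimal sketch. This is plain Python standing in for the concept; the class and relationship names are invented for illustration and are not Egeria's or JanusGraph's actual API.

```python
# Illustrative sketch of a unified metadata graph: nodes come from
# different tools, and edges link related metadata across tool boundaries.

class MetadataGraph:
    def __init__(self):
        self.nodes = {}   # node id -> {"tool": ..., "type": ..., "name": ...}
        self.edges = []   # (from_id, to_id, relationship)

    def add_node(self, node_id, tool, node_type, name):
        self.nodes[node_id] = {"tool": tool, "type": node_type, "name": name}

    def link(self, a, b, relationship):
        self.edges.append((a, b, relationship))

    def neighbors(self, node_id):
        """All nodes connected to node_id, regardless of which tool owns them."""
        out = []
        for a, b, rel in self.edges:
            if a == node_id:
                out.append((b, rel))
            elif b == node_id:
                out.append((a, rel))
        return out

g = MetadataGraph()
# Metadata harvested from two different tools:
g.add_node("t1", "etl-tool", "Table", "customers")
g.add_node("g1", "glossary", "Term", "Customer Record")
# A linking edge that "fills in the gap" between the two tools:
g.link("t1", "g1", "semantic-assignment")

print(g.neighbors("t1"))  # → [('g1', 'semantic-assignment')]
```

Governance can then be applied by walking this graph: a rule attached to the glossary term reaches every table linked to it, whichever tool the table came from.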
Testing the Brave New World of SaaS Applications (QUEST 2018, v1)
Gerie Owen
Testing Software as a Service (SaaS) requires specialized skills based on its components and function. The major areas of focus for SaaS functional testing include customizations and configurations, integrations, and data. Non-functional testing includes performance, security, disaster recovery, scalability, availability, and interoperability. The internal network must be tested for bandwidth and secure data transfer. Finally, a post-production test strategy is needed to address application performance monitoring and vendor upgrades.
Since SaaS software is not developed specifically to meet user-defined requirements, test leads and testers need to focus on the areas where changes to the end-to-end workflow are made. This workshop provides a framework for testing each component of SaaS applications and for planning, coordinating, and executing the end-to-end test. We’ll develop hands-on test scenarios for each component, plan a schedule for coordinating the end-to-end test, and develop a plan for regression testing vendor upgrades.
Tips For a Successful Cloud Proof-of-Concept - RightScale Compute 2013
RightScale
Speaker: Vijay Tolani - Cloud Solutions Engineer, RightScale
Most enterprises see POC projects as an important step in their path to public, private, or hybrid cloud. RightScale cloud experts will share on-the-ground experience from a range of enterprise cloud POCs, including business and technical best practices. You will learn how to set your POC strategy, choose your POC clouds, navigate technical hurdles, and measure success.
Tech Ed 2006 South East Asia Security and Compliance
Joel Oleson
A 200-300 level deck on SharePoint security, focusing on authentication vs. authorization with the authentication models introduced in WSS 3.0 and MOSS 2007.
Most organizations are under pressure to speed up the software delivery cycle, whether that’s to respond more quickly to the needs of the business, the needs of your customers, or just to keep up with the competition. Unfortunately, the database is commonly considered a bottleneck. Without the right processes in place, database change management can slow things down, adding risk and uncertainty and getting in the way of development and operations working together to deliver. Any organization that wants to fully benefit from a DevOps approach is going to have to overcome some specific challenges presented by the database. This session will teach you how to take DevOps principles and practices and apply them to SQL Server, so that you can speed up the database delivery cycle while protecting the information within.
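A core practice behind database DevOps is versioned, repeatable change management: every schema change is a numbered migration that runs exactly once, so the same pipeline can be replayed safely in every environment. The session is about SQL Server; the sketch below uses SQLite purely as a stand-in, and the table and function names are assumptions for illustration, not any specific tool's API.

```python
import sqlite3

def apply_migrations(conn, migrations):
    """Apply each (version, sql) pair exactly once, recording what ran."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version (version TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_version")}
    for version, sql in migrations:
        if version in applied:
            continue  # already deployed; safe to re-run the whole pipeline
        with conn:  # each migration and its bookkeeping commit atomically
            conn.execute(sql)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))

conn = sqlite3.connect(":memory:")
migrations = [
    ("001", "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)"),
    ("002", "ALTER TABLE customers ADD COLUMN email TEXT"),
]
apply_migrations(conn, migrations)
apply_migrations(conn, migrations)  # idempotent: nothing runs twice
print([r[0] for r in conn.execute("SELECT version FROM schema_version")])
```

Because the migration scripts live in source control alongside application code, the database moves through the same build, test, and deploy stages as everything else.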
The QA Testing Checklists for Successful Cloud Migration
TestingXperts
Moving to the cloud is a smarter way to get better and faster service at a lower price. This is only possible once all the boxes in the checklists mentioned in this article have been checked and you follow the steps of each testing area correctly. Testing the objectives, validations, and approaches mentioned in the cloud assessment checklist can be quite tough. Your best bet is to work with a team that has done cloud migration testing many times before.
SharePoint Migration: What You Need to Know
Oliver Wirkus
A migration to SharePoint is not an easy task and requires extensive, thorough planning to ensure success. This session walks you through all the necessary planning activities and provides established best practices and recommendations to ensure your migration planning and migration are efficient and successful.
Databases: The Neglected Technology in DevOps
DevOps.com
Much has been written about software delivery in DevOps, with much less focus on the database. However, DevOps can—and should—play an equally critical role in both software and database development. In this ebook, we examine how DevOps can be used for database development and delivery, factors influencing DevOps’ role in database delivery, and some of the technologies designed to help.
Join us for this lively panel discussion!
These slides highlight all the features offered in the standard Mendix Cloud.
Hosted on Cloud Foundry, the Mendix Cloud offers different scaling, resilience, and fallback options to all customers.
Version 1.6 - Q1 2018
Correlate Log Data with Business Metrics Like a Jedi
Trevor Parsons
The Logentries and Hosted Graphite integration allows you to connect two of your favorite Ops tools to easily extract important data from your log files, visualize them as metrics, and share them in Hosted Graphite dashboards.
• Integrate your systems to extract the metrics you need, from both your applications and log data.
• Set up log metric dashboards based on common use cases (e.g. error tracking, performance, app usage).
• Get off the "complexity elevator" of hosting your own in-house logging or graphite solutions.
• Delight your team and organization with valuable metrics and performance insights.
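The log-to-metric idea underlying the points above, pull a number out of each matching log line, then aggregate it for a dashboard, can be sketched in a few lines of plain Python. The log format and field names below are made up for illustration; Logentries and Graphite each have their own query and ingestion interfaces.

```python
import re
from collections import Counter

LOG_LINES = [
    "2018-01-05 12:00:01 level=ERROR path=/checkout latency_ms=920",
    "2018-01-05 12:00:02 level=INFO  path=/home     latency_ms=35",
    "2018-01-05 12:00:03 level=ERROR path=/checkout latency_ms=870",
]

# Extract the fields we care about from each line.
PATTERN = re.compile(r"level=(\w+)\s+path=(\S+)\s+latency_ms=(\d+)")

def extract_metrics(lines):
    """Turn raw log lines into countable metrics: errors per path, latencies."""
    errors = Counter()
    latencies = []
    for line in lines:
        m = PATTERN.search(line)
        if not m:
            continue  # unparseable lines are skipped, not fatal
        level, path, latency = m.group(1), m.group(2), int(m.group(3))
        if level == "ERROR":
            errors[path] += 1
        latencies.append(latency)
    return errors, latencies

errors, latencies = extract_metrics(LOG_LINES)
print(errors["/checkout"], max(latencies))  # → 2 920
```

Once the numbers exist, graphing them (error counts per endpoint, p99 latency) is what a hosted dashboard does for you, without running the logging stack yourself.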
AP Automation for EBS or PeopleSoft with Oracle WebCenter
Brian Huff
Improve the accuracy and turnaround time of your Accounts Payable processes using Oracle WebCenter. This talk describes all of the pieces in the Oracle stack that can help, and when each one is cost-effective.
We are on the cusp of a new era of application development software: instead of bolting on operations as an after-thought to the software development process, Kubernetes promises to bring development and operations together by design.
This presentation was given as the closing session of Container Conference 2018, on 3 August in Bangalore, by Anoop Kumar from Docker.
"In this session we will get familiarized with the technical aspects of the Docker EE 2.0 Platform. It will involve a walkthrough of the swarm as well as the relatively newly introduced Kubernetes integrations, how it enables organizational agility, choice and security and the future roadmap of the product suite. We'll finally do a quick demo of the platform and close with a Q&A section."
Agenda
1. The changing landscape of IT Infrastructure
2. Containers - An introduction
3. Container management systems
4. Kubernetes
5. Containers and DevOps
6. Future of Infrastructure Mgmt
About the talk
In this talk, you will get a review of the components and benefits of the container technologies Docker and Kubernetes. The talk focuses on making the solution platform-independent and gives an insight into Docker and Kubernetes for consistent and reliable deployment. We talk about how containers fit into and improve your DevOps ecosystem and how to get started with containerization. Learn a new deployment approach to use your infrastructure resources effectively and minimize overall cost.
How to build "AutoScale and AutoHeal" systems using DevOps practices and modern technologies. A complete build pipeline and the process of architecting a nearly unbreakable system were part of the presentation.
These slides were presented at the 2018 DevOps conference in Singapore. http://claridenglobal.com/conference/devops-sg-2018/
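The "AutoScale and AutoHeal" idea boils down to two control loops: scale replicas on load, and replace whatever fails its health check. A minimal, platform-agnostic sketch follows; the thresholds and names are illustrative, and in a real Kubernetes deployment these loops are played by the HorizontalPodAutoscaler and liveness probes rather than hand-written code.

```python
import math

def desired_replicas(current, cpu_utilization, target=0.5, min_r=2, max_r=10):
    """Proportional scaling rule, similar in spirit to Kubernetes' HPA:
    desired = ceil(current * observed_utilization / target), then clamped."""
    desired = math.ceil(current * cpu_utilization / target)
    return max(min_r, min(max_r, desired))

def heal(instances, is_healthy):
    """Auto-heal loop: replace any instance that fails its health check."""
    return [i if is_healthy(i) else f"{i}-restarted" for i in instances]

# Load doubled relative to target, so the replica count roughly doubles:
print(desired_replicas(current=4, cpu_utilization=0.9))  # → 8
# Instance "b" fails its check and is replaced:
print(heal(["a", "b"], is_healthy=lambda i: i != "b"))   # → ['a', 'b-restarted']
```

The "nearly unbreakable" quality comes from running both loops continuously: capacity tracks demand, and failed instances never linger.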
Modern big data and machine learning in the era of cloud, Docker and Kubernetes
Slim Baltagi
There is a major shift in web and mobile application architecture, from the ‘old-school’ approach to a modern ‘microservices’ architecture based on containers. Kubernetes has been quite successful in managing those containers and running them in distributed computing environments.
Now enabling Big Data and Machine Learning on Kubernetes will allow IT organizations to standardize on the same Kubernetes infrastructure. This will propel adoption and reduce costs.
Kubeflow is an open source framework dedicated to making it easy to use the machine learning tool of your choice and deploy your ML applications at scale on Kubernetes. Kubeflow is becoming an industry standard as well!
Both Kubernetes and Kubeflow will enable IT organizations to focus more effort on applications rather than infrastructure.
Docker & aPaaS: Enterprise Innovation and Trends for 2015
WaveMaker, Inc.
WaveMaker Webinar: Cloud-based App Development and Docker: Trends to watch out for in 2015 - http://www.wavemaker.com/news/webinar-cloud-app-development-and-docker-trends/
CIOs, IT planners, and developers at a growing number of organizations are taking advantage of the simplicity and productivity benefits of cloud application development. With Docker technology, cloud-based app development, or aPaaS (Application Platform as a Service), is only becoming more disruptive, forcing organizations to rethink how they handle innovation, time-to-market pressures, and IT workloads.
Visualpath provides the Best Docker Online Training by real-time faculty from Top MNCs. We are providing Certified Kubernetes Security Online Training in Ameerpet as well as the USA, UK, Canada, Dubai, and Australia. You can schedule a free demo by contacting us at +91-9989971070.
Visit Blog: https://visualpathblogs.com/
WhatsApp: https://www.whatsapp.com/catalog/917032290546/
Visit: https://www.visualpath.in/DevOps-docker-kubernetes-training.html
Have you ever wondered how search works while visiting an e-commerce site, internal website, or searching through other types of online resources? Look no further than this informative session on the ways that taxonomies help end-users navigate the internet! Hear from taxonomists and other information professionals who have first-hand experience creating and working with taxonomies that aid in navigation, search, and discovery across a range of disciplines.
Making AI Behave: Using Knowledge Domains to Produce Useful, Trustworthy Results
Access Innovations, Inc.
In today's highly charged atmosphere of anxiety and anticipation about AI, and especially LLMs, one of the biggest concerns is how to ensure that it returns accurate results (meaning both true and pertinent to its audience). This is particularly important to scholarly, scientific, and other technical organizations, whose constituents are often in very specific domains, such as medicine, engineering, history, biology, and chemistry. One extremely useful tool to incorporate in an AI-based process in such cases is a comprehensive and well-structured knowledge domain based on a controlled vocabulary.
Smart Submit and Client Support
Michael Millar, Junior Software Developer, and Frank Coates, Client Support Manager
Get a peek at the new and improved Smart Submit and learn about new, easier ways to contact the support team at Access Innovations.
How a Good Taxonomy Can Provide Valuable Business Insights
Kristen Monahan, Public Library of Science (PLOS)
Kristen is a business analyst. She won’t be talking about the PLOS taxonomy itself, but rather about how she uses that taxonomy to drill down into the massive amount of content, metadata, and usage and process data at PLOS for deep, detailed analysis and to drive business decisions. Much of this work involves trend analysis. For example, trend analysis of submissions can look at the time it takes from submission to decision by subject (narrow subjects like Covid, broad subjects like biotechnology), by institution, by country, and so on, to see not just the overall big picture but where in their submission and peer review workflows the bottlenecks might be. A trend analysis of topics over time can prompt them to issue a call for papers on a topic they think needs better coverage, and then look at both the short-term and long-term trends resulting from that call. Their taxonomy doesn’t just make their content smarter; it makes how they publish that content smarter, too.
Editor and Peer Reviewer Assignments Using Data Harmony
Andrew Smeall, Hindawi Publishing
Andrew will show how Hindawi, an open access publisher, applies their taxonomy to make editor and reviewer assignments for incoming submissions to their journals.
Access Innovations and Atypon: Beyond Content Tagging
Hong Zhou and Gerasimos Razis, Atypon
Gerasimos and Hong will discuss the changes to the Atypon platform since DHUG 2020.
Getting to the Point: Using AI and Taxonomies to Craft Meta-Titles
Travis Hicks, American Society of Clinical Oncology (ASCO)
Looking to better leverage SEO and include key terms in the URL construct for research abstracts, ASCO is working with Access Innovations to evaluate how to programmatically create short titles for abstracts. The idea is to index titles against existing taxonomies as a way of producing a short title that succinctly identifies what an abstract is about, for purposes of constructing a new URL configuration. Travis will discuss the need, challenges, and early results of the project.
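The approach described, index a title against a taxonomy and keep only the matched terms as a short, URL-friendly title, can be sketched roughly as follows. The taxonomy terms and slug rules here are invented for illustration; they are not ASCO's taxonomy or Data Harmony's actual indexing logic.

```python
import re

# Hypothetical taxonomy fragment, ordered by preference.
TAXONOMY = ["breast cancer", "immunotherapy", "clinical trial", "biomarkers"]

def short_title(full_title, taxonomy=TAXONOMY, max_terms=3):
    """Keep only taxonomy terms found in the title, joined into a URL slug."""
    lowered = full_title.lower()
    hits = [t for t in taxonomy if t in lowered][:max_terms]
    slug = "-".join(re.sub(r"\s+", "-", t) for t in hits)
    # Fallback when no taxonomy term matches: a truncated generic slug.
    return slug or re.sub(r"\W+", "-", lowered).strip("-")[:40]

title = ("A Phase II Clinical Trial of Combination Immunotherapy "
         "in Metastatic Breast Cancer")
print(short_title(title))  # → breast-cancer-immunotherapy-clinical-trial
```

Because the slug is built from controlled vocabulary terms rather than free text, the same concept always yields the same URL fragment, which is exactly what makes the URLs both shorter and more consistent for SEO.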
Expanding the Use of MAIstro at ASCE
Xi Van Fleet, American Society of Civil Engineers
Using MAIstro, ASCE created the subject/topic taxonomies for their publications to enhance content discovery and business insight. After achieving their primary goal, they have been expanding its use for other applications.
Lessons Learned From Building a Taxonomy and Indexing 140 Years of Content
Michael Darr, Project Manager, D33 – American Chemical Society Pubs IT
Michael will talk about the things they would do differently if they were to build a new taxonomy and index a legacy file, and the things they did right the first time.
Bill’s talk is entitled “WHAT’S IN A NAME? How Kew helps drug regulators disambiguate the messy welter of medicinal plant names to shore up regulation and save lives”. It’s really eye-opening to realize how complicated and imprecise names can get, with multiple scientific, pharmaceutical and popular names for the same thing or with one name used for completely different things.
This has real-world consequences. For example, the EU mistakenly banned a useful plant we use every day when intending to ban a poisonous one because of a naming problem. How Kew is using semantic and taxonomic tools and technologies to bring order to this complexity (I almost said chaos) is really fascinating. They’re also helping to disambiguate nomenclature and provide links to authoritative information for botanical terms for use in journal articles, among other things.
Daniel Vasicek discusses the processes undertaken to OCR various kinds of content from the University of Florida Special Collections, to make them machine-readable for indexing.
Enhancing Research Orchestration Capabilities at ORNL
Globus
Cross-facility research orchestration comes with ever-changing constraints regarding the availability and suitability of various compute and data resources. In short, a flexible data and processing fabric is needed to enable the dynamic redirection of data and compute tasks throughout the lifecycle of an experiment. In this talk, we illustrate how we easily leveraged Globus services to instrument the ACE research testbed at the Oak Ridge Leadership Computing Facility with flexible data and task orchestration capabilities.
Large Language Models and the End of Programming
Matt Welsh
Talk by Matt Welsh at Craft Conference 2024 on the impact that Large Language Models will have on the future of software development. In this talk, I discuss the ways in which LLMs will impact the software industry, from replacing human software developers with AI, to replacing conventional software with models that perform reasoning, computation, and problem-solving.
In software engineering, the right architecture is essential for robust, scalable platforms. Wix has undergone a pivotal shift from event sourcing to a CRUD-based model for its microservices. This talk will chart the course of this pivotal journey.
Event sourcing, which records state changes as immutable events, provided robust auditing and "time travel" debugging for Wix Stores' microservices. Despite its benefits, the complexity it introduced in state management slowed development. Wix responded by adopting a simpler, unified CRUD model. This talk will explore the challenges of event sourcing and the advantages of Wix's new "CRUD on steroids" approach, which streamlines API integration and domain event management while preserving data integrity and system resilience.
Participants will gain valuable insights into Wix's strategies for ensuring atomicity in database updates and event production, as well as caching, materialization, and performance optimization techniques within a distributed system.
Join us to discover how Wix has mastered the art of balancing simplicity and extensibility, and learn how the re-adoption of the modest CRUD has turbocharged their development velocity, resilience, and scalability in a high-growth environment.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster, who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart... - Globus
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data and applying computations on a different system. As a part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined data workflows, which can be run on-demand, capable of applying many data reduction and data analysis operations to the large ESGF data archives, transferring only the resultant analysis products (e.g., visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
First Steps with Globus Compute Multi-User Endpoints - Globus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researchers' workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we encountered were that each researcher had to set up and manage their own single-user Globus Compute endpoint, and that the workloads had varying resource requirements (CPUs, memory, and wall time) between different runs. We hope that the multi-user endpoint will help address these challenges, and we share an update on our progress here.
Developing Distributed High-performance Computing Capabilities of an Open Sci... - Globus
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team’s work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
Understanding Globus Data Transfers with NetSage - Globus
NetSage is an open privacy-aware network measurement, analysis, and visualization service designed to help end-users visualize and reason about large data transfers. NetSage traditionally has used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks worldwide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several example questions that NetSage can answer, including: Who is using Globus to share data with my institution, and what kind of performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what kind of performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did the overall data transfer performance compare to that of the Globus users?
Software Engineering, Software Consulting, Tech Lead.
Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Security,
Spring Transaction, Spring MVC,
Log4j, REST/SOAP WEB-SERVICES.
An Enterprise Resource Planning (ERP) system includes various modules that reduce any business's workload. Additionally, it organizes workflows, which enhances productivity. Here is a detailed explanation of the ERP modules. Going through the points will help you understand how the software is changing work dynamics.
For more details, see: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
Enhancing Project Management Efficiency: Leveraging AI Tools like ChatGPT - Jay Das
With the advent of artificial intelligence (AI) tools, project management processes are undergoing a transformative shift. By using tools like ChatGPT and Bard, organizations can empower their leaders and managers to plan, execute, and monitor projects more effectively.
Globus Connect Server Deep Dive - GlobusWorld 2024Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
Unleash Unlimited Potential with One-Time Purchase
BoxLang is more than just a language; it's a community. By choosing a Visionary License, you're not just investing in your success; you're actively contributing to the ongoing development and support of BoxLang.
2. Meeting the growing demands for data analysis
• What is the purpose of Data Harmony Cloud?
• To help meet the growing needs of our customers to index and process
increasingly large amounts of data
• Allow our customers to easily deploy scalable, responsive, and self-managing
services utilizing Data Harmony
• How do we achieve this?
• Centralized project management
• Git-driven project storage and version control
• Docker containers
• For ease of deployment and project loading
• Kubernetes clusters
• Highly scalable, self-managing clusters capable of largely
administering themselves
DATA HARMONY CLOUD
3. Simplifying the project management workflow
• Utilizing Git-style repositories for project storage and versioning
• BitBucket, GitHub, etc…
• What are the benefits of this approach?
• Allows for project development to be more agile
• Thesaurus features can be developed in sprints
• Development versions of the thesaurus can be deployed for
testing before deployment to production
• Thesauri can be versioned and branched for different portions
of the application
• Project management is greatly simplified at all levels
• Native integration with Jira, Trello, etc…
CENTRALIZED GIT PROJECT STORE
4. Instant Data Harmony Deployments
• Data Harmony can now scale alongside our clients' infrastructure
• Implementing Platform as a Service alongside our Software as a
Service offerings
• The amount of data being processed is scaling at a dramatic pace
• Our cloud implementation allows us to scale alongside our
customers seamlessly, through Docker containers
• Allow for development around Data Harmony to be streamlined
and integrate with agile practices
• Resources are available as needed
• Create production and staging environments when updating
service versions or implementing thesaurus changes
DOCKER CONTAINERS
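The production/staging pattern described above can be sketched in Docker Compose. This is an illustrative fragment only; the service names, image tags, ports, and the APIKEY variable are assumed placeholders, not actual Data Harmony configuration.

```yaml
# docker-compose.yml -- illustrative sketch; names, tags, and ports are hypothetical
services:
  dataharmony:
    image: accessinnovations/dataharmony:stable   # assumed production tag
    ports:
      - "8080:8080"
    environment:
      - APIKEY=${DH_APIKEY}                       # key tying the container to its project
    restart: unless-stopped
  dataharmony-staging:
    image: accessinnovations/dataharmony:next     # assumed staging tag for testing updates
    ports:
      - "8081:8080"
```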
5. DOCKER BASICS
Image
The basis of a Docker container. The content at rest.
Container
The image when it is ‘running.’ The standard unit for an app service.
Engine
The software that executes commands for containers. Networking and volumes are part of
the Engine. Engines can be clustered together.
Registry
Stores, distributes, and manages Docker images.
Control Plane
Management plane for container and cluster orchestration.
6. • Kubernetes is an open-source system for automating deployment,
scaling, and management of containerized applications.
• Improves reliability
- Continuously monitors and manages your containers
- Will scale your application to handle changes in load
• Better use of infrastructure resources
- Helps reduce infrastructure requirements by gracefully scaling up
and down your entire platform via autoscaling
• Coordinates what containers run where and when across your system
• Coordinates how all the different types of containers in a system talk to each
other
• Easily coordinate deployments of your system
- Which containers/projects need to be deployed
- Where should the containers be deployed
o Mars, AWS, Azure, SWCP, …
WHAT DOES KUBERNETES DO?
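In standard Kubernetes terms, declaring "which containers run where and when" is done with a Deployment. The following is a generic sketch, not the actual Data Harmony manifest; the names, image, and resource figures are assumptions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dataharmony                 # hypothetical name
spec:
  replicas: 3                       # Kubernetes keeps three pods running at all times
  selector:
    matchLabels:
      app: dataharmony
  template:
    metadata:
      labels:
        app: dataharmony
    spec:
      containers:
      - name: dataharmony
        image: accessinnovations/dataharmony:stable   # hypothetical image
        resources:
          requests:
            cpu: "500m"             # tells the scheduler where the pod fits
            memory: "512Mi"
```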
7. THE POD IS THE CORE KUBERNETES COMPONENT
• The Pod is the core component of Kubernetes
• Collection of 1 or more containers
• Each pod should focus on one container; however, sidecar containers
can be added to enhance features of the core container
spec:
  template:
    spec:
      containers:
      - name: dataharmony                  # container name (illustrative)
        image: accessinnovations/dataharmonyrepo
        env:                               # API key passed as an environment variable
        - name: APIKEY
          value: "384c1bb9-b539-420e-a647-a368e86b47b7"
8. PODS CAN HANDLE SCALING AND DEPLOYMENTS
• Once Kubernetes understands what is in a pod, multiple
management features are available:
• System Performance
- Scale up/down the number of pods based on CPU load or
other criteria
• System Monitoring
- Probes to check the health of each pod
- Any unhealthy ones get killed and a new pod is put into service
• Deployments
- Deploy new versions of the container
- Control traffic to the new pods to test the new version
o Blue/Green deployments
o Rolling deployments
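The health checks and rolling deployments above map to standard Kubernetes liveness probes and a Deployment update strategy. A minimal sketch, assuming a hypothetical /health endpoint on port 8080:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # never drop below full capacity during a rollover
      maxSurge: 1              # bring up one new-version pod at a time
  template:
    spec:
      containers:
      - name: dataharmony
        livenessProbe:         # failing pods are killed and replaced automatically
          httpGet:
            path: /health      # assumed health endpoint
            port: 8080
          periodSeconds: 10
```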
9. Contained Herein
• Deployment Workflows
• Deploying, Upgrading, and Maintaining
• Centralization of projects in BitBucket
• Porting Projects, Production/Development
• Agile platform development
• On-demand production environments
• Self-managing service with scaling logic provided
• Scalability of Data Harmony
• Cloud scalability
• AWS, Azure, Google, and On-Premise
HOW DOES THIS BENEFIT YOU?
10. Deploying, Upgrading, and Maintaining
• Seamless service deployments and upgrades
• Server upgrades are facilitated through the Docker service
• Reduces or eliminates any need for downtime
• Project migration is handled automatically through the container’s
logic
• All server/project relationships are managed by the API keys
• Utilizing Docker to automate the deployment reduces deployment
time
• Rollover time to new versions or deployments reduced to
seconds
• Maintaining the service is handled automatically through the
container logic
DEPLOYMENT WORKFLOWS
11. Centralization of projects in BitBucket
• Projects are now maintained through centralized repositories
• BitBucket, GitHub, etc…
• Simplifies how projects are stored and tracked
• Data Harmony projects are now centralized in BitBucket
• Makes deploying projects through Docker extremely easy
• In fact, this is automated as part of the deployment process
• For new pipelines, workflows, or applications
• A single command line allows for immediate deployment
• Projects are now strictly versioned and branched
• Allows for app-dependent versions of thesauri
DEPLOYMENT WORKFLOWS
12. Porting Projects, Production/Development
• Simplified and efficient deployment/updates of your service
• Improved development/production cycle for your projects
• Implementing BitBucket and version control into the workflow
allows for a streamlined experience when working with
development versions of your thesauri.
• Centralizing the projects in BitBucket keeps development and production versions in sync
• Development projects are clones of a project on a different branch
• When all changes are finished, the changes can be merged/pushed to production
• The production version of the project on BitBucket can be seamlessly pushed to the containers
HOW YOUR PROJECT IS DEPLOYED
13. On-demand production environments
• Project deployment, upgrading, or migration is now simplified to
allow for swift platform changes
• DevOps, IT, and management have complete control over how Data
Harmony services are deployed for the resources that utilize them
• Allows for agile platform deployment
• Infrastructure planning minimized
• Staging environments for updates
• Development teams have access to deploy identical or development
versions of their Data Harmony stack
• Moving from staging to live is also streamlined once the system is
deemed ready for production
AGILE PLATFORM DEVELOPMENT
14. Self-monitoring and disaster recovery
• Utilizing the built-in monitoring tools in our container, the Kubernetes
cluster or container can govern and heal itself
• With little to no intervention from DevOps, the Data Harmony service
is capable of monitoring itself and taking steps to prevent service
interruptions.
• The logic for governing these behaviors is provided
• In the case of disaster on any infrastructure
• On boot of the server, the Docker service will restore all known
Data Harmony containers that were previously running
• Downtime reduced from potentially a day to a matter of minutes,
mitigating the worst-case scenario
SELF-MANAGING SERVICE
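The restore-on-boot behavior described above corresponds to Docker's restart policies; a minimal sketch (the service and image names are placeholders):

```yaml
# docker-compose.yml fragment -- the container comes back automatically
# after a crash or a host reboot
services:
  dataharmony:
    image: accessinnovations/dataharmony:stable   # hypothetical image
    restart: always
```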
15. AWS, Azure, Google, and On-Premise
• Whether your use case is on-premise hardware, leased
hardware, or deployment in the cloud
• Data Harmony Cloud is platform agnostic and can be deployed
anywhere whether it’s AWS, Azure, Google Cloud, your own private
cloud, or a standard server.
• Combining the power of Docker and Kubernetes, the service can
be scaled to meet any indexing need
• Services can be configured to scale “infinitely” or finitely depending
on the use-case
• Kubernetes handles day-to-day management of the service
• Monitoring, roll-over, and automatic container deployment
depending on current system load
SCALABILITY OF DATA HARMONY
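Scaling "depending on current system load" is typically expressed in Kubernetes as a HorizontalPodAutoscaler. The following is a generic sketch with assumed names and thresholds, not the actual Data Harmony scaling logic.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: dataharmony-hpa             # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: dataharmony               # the Deployment being scaled
  minReplicas: 2
  maxReplicas: 20                   # a "finite" cap; raise it for near-unlimited scaling
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70      # add pods when average CPU exceeds 70%
```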
17.
• When will this be ready?
• Data Harmony Cloud is available now
• How can I try it out?
• Ask us for a trial!
• How complex is this to work with?
• For most people, just a single command line!
CONCLUSION