This document summarizes a project by NASA's Goddard Space Flight Center to estimate biomass in the arid and semi-arid regions of sub-Saharan Africa using high-resolution satellite imagery processed on AWS. The project aims to process more than 3,000 scenes of imagery over Niger to generate vegetation indices and estimate carbon storage, requiring more than 100 virtual machines running for a month on AWS. Cycle Computing software will be used to automate resource provisioning and data management on AWS. The goal is to develop methods to scale the analysis to the entire arid and semi-arid region of sub-Saharan Africa using AWS's flexible computing capacity.
Moving Workloads into AWS GovCloud (US) - AWS Symposium 2014 - Washington D.C. - Amazon Web Services
In a 2012 IDC study, researchers found that customers who migrated to AWS broke even in just seven months and experienced a 626% five-year return on investment. Furthermore, public sector customers’ typical migration needs make this even easier, faster, and more cost effective. Learn how to identify the best workloads to move, the logistics of this transition (“lift-and-shift” or a phased approach), and the benefits your organization will experience from day one.
Federal Compliance Deep Dive: FISMA, FedRAMP, and Beyond - AWS Symposium 2014... - Amazon Web Services
Security is your number one priority, and it is ours too. With customers around the world across all industries, our top priority is to ensure the underlying cloud infrastructure is secure and compliant. This presentation will address our shared responsibility model and specific compliance requirements such as FedRAMP and the DISA/DoD Cloud Security Models, and detail the specific AWS compliance programs that support our customers in these compliance environments.
(ISM206) Modern IT Governance Through Transparency and Automation - Amazon Web Services
As information technology becomes increasingly strategic to more enterprises and government agencies, and as the threat landscape evolves and becomes more challenging, governance, risk management, and compliance (GRC) increasingly become C-suite issues. In this session, we examine how the AWS cloud platform, through APIs and automation, enables advances in and the implementation of best practices for governance and compliance. Learn how AWS can help senior leadership confidently answer key governance questions, such as: What do I have? How is it performing? Who controls it? Is it secure and compliant? Are we using the right processes and protections when we make changes? What is it costing me?
AWS Deployment Best Practices - AWS Symposium 2014 - Washington D.C. Amazon Web Services
Description: This session features real-world best practices for deploying AWS cloud services. You will hear about cloud use cases, governance, security, cloud architecture, cost optimization, and leveraging the appropriate support offerings. The session draws on the AWS adoption experience of hundreds of government customers and highlights lessons learned along the way.
Using the Open Science Data Cloud for Data Science Research - Robert Grossman
The Open Science Data Cloud is a petabyte scale science cloud for managing, analyzing, and sharing large datasets. We give an overview of the Open Science Data Cloud and how it can be used for data science research.
Time to Science/Time to Results: Transforming Research in the Cloud - Amazon Web Services
This session demonstrates how the cloud can accelerate breakthroughs in scientific research by providing on-demand access to powerful computing. You will gain insight into how scientific researchers are using the cloud to solve complex science, engineering, and business problems that require high-bandwidth, low-latency networking and very high compute capability. You will hear how leveraging the cloud reduces the cost and time of conducting large-scale, worldwide collaborative research. Researchers can access computational power, data storage, supercomputing resources, and data sharing capabilities in a cost-efficient manner without implementation delays. Disease research can be accomplished in a fraction of the time, and innovative researchers in small schools or distant corners of the world gain access to the same computing power as those at major research institutions by leveraging Amazon EC2, Amazon S3, C3 instances, and more to increase collaboration. This session will provide best practices and insight from the UC Berkeley AMP Lab on the services used to connect disparate sets of data to drive meaningful new insight and impact.
Advanced Strategies for Leveraging AWS for Disaster Recovery - Amazon Web Services
Amazon Web Services (AWS) provides powerful APIs and services that enable AWS to be used for production use cases, including “pay as you go” disaster recovery (DR) in the cloud.
In this presentation you'll learn how CloudVelocity automates processes that leverage these APIs for entire application environments, from the OS to configurations, updates, patches, and even IP addresses. This helps businesses use the AWS Cloud for faster disaster recovery of their critical IT systems without incurring the infrastructure expense of a second physical site. The webinar will also demonstrate a live migration of a multi-tier app and its environment into AWS for DR, and the impact of automation on DR deployment for the City of Asheville, NC.
Tooling Up for Efficiency: DIY Solutions @ Netflix - ABD319 - re:Invent 2017 - Amazon Web Services
At Netflix, we have traditionally approached cloud efficiency from a human standpoint, whether it be in-person meetings with the largest service teams or manually flipping reservations. Over time, we realized that these manual processes are not scalable as the business continues to grow. Therefore, in the past year, we have focused on building out tools that allow us to make more insightful, data-driven decisions around capacity and efficiency. In this session, we discuss the DIY applications, dashboards, and processes we built to help with capacity and efficiency. We start at the ten-thousand-foot view to understand the unique business and cloud problems that drove us to create these products, and discuss implementation details, including the challenges encountered along the way. Tools discussed include Picsou, the successor to our AWS billing file cost analyzer; Libra, an easy-to-use reservation conversion application; and cost and efficiency dashboards that relay useful financial context to 50+ engineering teams and managers.
What is innovation? How can cloud computing help you innovate? How can you make your applications smarter and predictive? How can you interpret data and anticipate trends? This session explores these questions with AWS artificial intelligence services such as Amazon Machine Learning, Amazon Rekognition, and Amazon Polly, and with serverless building blocks such as AWS Lambda and AWS Step Functions.
Event Driven Architecture with a RESTful Microservices Architecture (Kyle Ben... - Confluent
Tinder’s Quickfire Pipeline powers all things data at Tinder. It was originally built using AWS Kinesis Firehose and has since been extended to use both Kafka and other event buses. It is the core of Tinder’s data infrastructure. This rich flow of both client and backend data has been extended to serve a variety of needs at Tinder, including experimentation, ML, CRM, and observability, giving backend developers easier access to shared client-side data. We do this using many systems, including Kafka, Spark, Flink, Kubernetes, and Prometheus. Many of Tinder’s systems were originally designed in an RPC-first architecture.
Topics we’ll discuss on decoupling your system at scale via event-driven architectures include:
– Powering ML, backend, observability, and analytical applications at scale, including an end-to-end walkthrough of the processes that allow non-programmers to write and deploy event-driven data flows.
– An end-to-end look at dynamic event processing that creates other stream processes, via a dynamic control-plane topology pattern and the broadcast state pattern.
– How to manage the unavailability of cached data that would normally come from repeated API calls for data that’s being backfilled into Kafka, all online (and why this is not necessarily a “good” idea).
– Integrating common OSS frameworks and libraries like Kafka Streams, Flink, Spark, and friends to encourage the best design patterns for developers coming from traditional service-oriented architectures, including pitfalls and lessons learned along the way.
– Why and how to avoid overloading microservices with excessive RPC calls from event-driven streaming systems.
– Best practices in common data flow patterns, such as shared state via RocksDB + Kafka Streams, as well as complementary tools in the Apache ecosystem.
– The simplicity and power of streaming SQL with microservices.
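The decoupling these points rely on can be shown with a minimal in-memory publish/subscribe sketch in Python. This is a toy stand-in for Kafka, not any real event bus: the `EventBus` class and the "swipe" topic are invented for illustration.

```python
from collections import defaultdict

class EventBus:
    """Toy in-memory event bus: producers publish to topics, consumers subscribe."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # The producer never knows who consumes the event -- that is the decoupling.
        for handler in self.subscribers[topic]:
            handler(event)

bus = EventBus()
analytics, ml_features = [], []
# Two independent consumers of the same hypothetical "swipe" topic.
bus.subscribe("swipe", analytics.append)
bus.subscribe("swipe", lambda e: ml_features.append(e["user"]))
bus.publish("swipe", {"user": "u1", "direction": "right"})
```

Real event buses add persistence, partitioning, and replay on top of this; the shape of the decoupling is the same.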
Course 3: Types of Data and Opportunities by Nikolaos Deligiannis - Betacowork
For more info about our Big Data courses, check out our website ➡️ https://www.betacowork.com/big-data/
---------
"Data is the new oil" - Many companies and professionals do not know how to use their data or are not aware of the added value they could gain from it.
It is in response to these problems that the project “Brussels: The Beating Heart of Big Data” was born.
This project, financed by the Brussels-Capital Region and organised by Betacowork, offers 3 training cycles of 10 courses on big data, at both beginner and advanced levels. These 3 cycles will be followed by a hackathon weekend.
No prerequisites are required to start these courses. The aim of these courses is to familiarize participants with the principles of Big Data.
By 2020, 50% of all new software will process machine-generated data of some sort (Gartner). Historically, machine data use cases have required non-SQL data stores like Splunk, Elasticsearch, or InfluxDB.
Today, new SQL DB architectures rival non-SQL solutions in ease of use, scalability, cost, and performance. Join this webinar for a detailed comparison of machine data management approaches.
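As a minimal sketch of that claim, machine-generated log data can be stored and aggregated with plain SQL; here using Python's built-in sqlite3. The schema and rows are invented for the example, and a production system would use a scalable SQL engine rather than SQLite.

```python
import sqlite3

# Hypothetical machine-generated log events, stored and queried with plain SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (ts INTEGER, host TEXT, level TEXT, msg TEXT)")
conn.executemany(
    "INSERT INTO logs VALUES (?, ?, ?, ?)",
    [(1, "web-1", "ERROR", "timeout"),
     (2, "web-1", "INFO", "ok"),
     (3, "web-2", "ERROR", "disk full")],
)
# Errors per host: the kind of aggregation log-search tools are used for.
rows = conn.execute(
    "SELECT host, COUNT(*) FROM logs WHERE level = 'ERROR' GROUP BY host ORDER BY host"
).fetchall()
print(rows)  # [('web-1', 1), ('web-2', 1)]
```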
Businesses are generating more data than ever before.
Real-time data analytics requires IT infrastructure that often needs to scale up quickly, and running an on-premises environment in this setting has its limitations.
Organisations often require a massive amount of IT resources to analyse their data and the upfront capital cost can deter them from embarking on these projects.
What’s needed is scalable, agile and secure cloud-based infrastructure at the lowest possible cost so they can spin up servers that support their data analysis projects exactly when they are required. This infrastructure must enable them to create proof-of-concepts quickly and cheaply – to fail fast and move on.
Watch a replay of the webinar: https://www.youtube.com/watch?v=BtzPgLBy56w
451 Research and NuoDB outline the key database criteria for cloud applications. Explore how applications deployed in the cloud require a combination of standard functionality, such as ANSI SQL, and new capabilities specifically required to take full advantage of cloud economics, such as elastic scalability and continuous availability.
Data-driven organizations can be challenged to deliver new and growing business intelligence requirements from existing data warehouse platforms constrained by a lack of scalability and performance. The solution for customers is a data warehouse that scales for real-time demands and uses resources in a more optimized and cost-effective manner. Join Snowflake, AWS, and Ask.com to learn how Ask.com enhanced BI service levels and decreased expenses while meeting the demand to collect, store, and analyze over a terabyte of data per day. Snowflake Computing delivers a fast and flexible elastic data warehouse solution that reduces complexity and overhead, built on top of the elasticity, flexibility, and resiliency of AWS.
Join us to learn:
• How Ask.com eliminates data redundancy and simplifies and accelerates data load, unload, and administration
• How to support new and fluid data consumption patterns with consistently high performance
• Best practices for scaling high data volumes on Amazon EC2 and Amazon S3
Who should attend: CIOs, CTOs, CDOs, Directors of IT, IT Administrators, IT Architects, Data Warehouse Developers, Database Administrators, Business Analysts and Data Architects
How to Build Forecasting Services Leveraging ML and Deep Learn... Algorithms - Amazon Web Services
Forecasting is an important process for a great many companies and is used in many areas to try to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data containing a temporal component and then use an algorithm that, starting from the type of data analyzed, produces an accurate forecast.
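The kind of pre-processing described above can be sketched with the standard library alone. The monthly figures and the naive forecasting rule below are invented for illustration; they are not the algorithm the session presents.

```python
# Hypothetical monthly sales series with a gap; a stdlib-only sketch of
# pre-processing plus a naive forecast.
series = {1: 100, 2: 120, 4: 160}            # month -> units sold; month 3 is missing

# Fill the gap by linear interpolation between its neighbours.
filled = dict(series)
filled[3] = (series[2] + series[4]) / 2      # 140.0

# Naive forecast: last value plus the mean of recent increments.
months = sorted(filled)
increments = [filled[b] - filled[a] for a, b in zip(months, months[1:])]
forecast = filled[months[-1]] + sum(increments) / len(increments)
print(forecast)  # 180.0
```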
Big Data for Startups: How to Build Big Data Applications in Server... Mode - Amazon Web Services
The variety and volume of data created every day is growing ever faster and represents an unrepeatable opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment accessible only to established companies. But the elasticity of the cloud and, in particular, serverless services let us break through these limits.
Let's see how it is possible to develop Big Data applications quickly, without worrying about infrastructure, devoting all our resources instead to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and how to deploy your application in just a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing the pace of innovation. Over this period we learned how changing our approach to application development allowed us to greatly increase agility and release velocity and, ultimately, to build more reliable and scalable applications. In this session we will explain how we define modern applications and how building modern apps affects not only application architecture but also organizational structure, development release pipelines, and even the operating model. We will also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to Spend Up to 90% Less with Containers and Spot Instances - Amazon Web Services
Container usage continues to grow.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can take advantage of Spot Instances, leading to average savings of 70% compared to On-Demand Instances. In this session we will look at the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run applications of various kinds, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify fintech integrations, and accelerate adoption of the various Open Banking business models. AWS and FinConecta would therefore like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make Your Startup's Offering Unique in the Market with Machine Lea... Services - Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides ready-to-use services and, at the same time, lets you customize and build the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, with the help of a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: Automate the Management and Deployment of... - Amazon Web Services
With the traditional approach to IT, implementing DevOps techniques was difficult for many years: they often involved manual activities that occasionally caused application downtime and interrupted users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach, at low cost, for any kind of workload, ensuring greater system reliability and yielding significant improvements in business continuity.
AWS offers AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances by means of Chef and Puppet.
Learn how to leverage AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to Support Your Windows Workloads - Amazon Web Services
Want to know your options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we will discuss options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and running Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis powered by artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are hosting a free virtual event next Wednesday, October 14th, from 12:00 to 13:00 dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in VMware vSphere®-based cloud environments and access a wide range of AWS services, taking full advantage of the AWS cloud while protecting existing VMware investments.
Build Your First Serverless Ledger-Based App with QLDB and NodeJS - Amazon Web Services
Many companies today build applications with ledger-style functionality, for example to verify the history of credits and debits in banking transactions, or to track the flow of their products through the supply chain.
At the heart of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB removes the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we will discover how to build a complete serverless application that uses QLDB's features.
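To make "transparent, immutable, and cryptographically verifiable" concrete, here is a minimal hash-chain sketch in Python. This is a conceptual toy, not the QLDB API, and the record fields are invented for the example.

```python
import hashlib
import json

# Every entry embeds the hash of the previous one, so any tampering is detectable.
def append(ledger, record):
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    body = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    ledger.append({"record": record, "prev": prev,
                   "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(ledger):
    for i, entry in enumerate(ledger):
        prev = ledger[i - 1]["hash"] if i else "0" * 64
        body = json.dumps({"record": entry["record"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
    return True

ledger = []
append(ledger, {"account": "A", "credit": 50})
append(ledger, {"account": "A", "debit": 20})
print(verify(ledger))                # True
ledger[0]["record"]["credit"] = 500  # tampering breaks the chain
print(verify(ledger))                # False
```

A real ledger database adds durable storage, querying, and a Merkle-style digest on top of this chaining idea.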
With the rise of microservices architectures and rich mobile and web applications, APIs are more important than ever for delivering an exceptional user experience to end users. In this session we will learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dive into several scenarios, understanding how AppSync can help solve these use cases by building modern APIs with real-time and offline data update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
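The core GraphQL idea (the client names exactly the fields it wants, and the server resolves only those) can be sketched in a few lines of Python. This is a toy resolver with invented match data, not AppSync or any GraphQL library.

```python
# Nested data a server might expose; fields are invented for the example.
data = {"match": {"home": "Milan", "away": "Inter", "score": "1-0", "minute": 78}}

def resolve(obj, selection):
    # selection maps field -> nested selection, or None for a leaf field.
    return {field: obj[field] if sub is None else resolve(obj[field], sub)
            for field, sub in selection.items()}

# Equivalent of the query: { match { home score } }
result = resolve(data, {"match": {"home": None, "score": None}})
print(result)  # {'match': {'home': 'Milan', 'score': '1-0'}}
```

The client receives only `home` and `score`; `away` and `minute` are never fetched, which is what makes GraphQL attractive for bandwidth-sensitive mobile and web apps.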
Oracle Databases and VMware Cloud™ on AWS: Debunking the Myths - Amazon Web Services
Many organizations reap the benefits of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, and performance risks can be introduced when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and streamline the migration of Oracle workloads while accelerating the transformation to the cloud; they dive into the architecture and show how to take full advantage of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies managing Docker containers through an orchestration layer controlling deployment and lifecycle. In this session we will present the service's main features, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... - Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf - Peter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
PHP Frameworks: I want to break free (IPC Berlin 2024) - Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk encourages a more independent approach to using PHP frameworks, moving towards more flexible and future-proof PHP development.
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps means. We also ran a lovely workshop with the participants, exploring different ways to think about quality and testing across the DevOps infinity loop.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
GridMate - End to end testing is a critical piece to ensure quality and avoid... - ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Generative AI Deep Dive: Advancing from Proof of Concept to Production - Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
A tale of scale & speed: How the US Navy is enabling software delivery from l... - sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Removing Uninteresting Bytes in Software Fuzzing - Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
2. AWS Government, Education, and Nonprofit Symposium
Washington, DC | June 25-26, 2015
Fastest growing workloads
• Fraud detection
• Risk modeling
• Drug design
• Genomics
• Modeling and simulation
• Unstructured data analysis, data lakes
3. Most resource intensive
[Scale of resources: 1 core → 8 cores → 8 servers → 10–10,000 servers]
4. Great, so… what's the problem?
5. The challenge of fixed capacity
[Chart: capability over time, comparing fixed internal capacity against growing system and organization demand]
6. Transform / life sciences
The problem in 2013:
• Cancer research needed 50,000 cores, not available in-house
The options they didn't choose:
• Buy infrastructure: spend $2M, wait 6 months
• Write custom software: 9–12 months for this one app
Solution:
• Created a 10,600-server cluster
• 39.5 years of computing in 8 hours
• Found 3 potential drug candidates!
• Total infrastructure bill: $4,372
7. Cycle powers cloud BigData and BigCompute
[Diagram: data workflow and cloud orchestration connecting analytics and modeling across internal compute and compute burst]
Software required to drive analytics and simulation at scale:
• Easy access
• Highly automated
• On-demand
• Ask the right questions
8. Best way to try it… try it: Tim@cyclecomputing.com
9. Measure Woody Biomass on the South Side of the Sahara at the 40–50 cm Scale Using AWS
Overview of the NASA Head in the Clouds Project presented at the Amazon Web Services Public Summit 2015
Daniel Duffy, daniel.q.duffy@nasa.gov, on Twitter @dqduffy
High Performance Computing Lead at the NASA Center for Climate Simulation (NCCS) – http://www.nccs.nasa.gov and @NASA_NCCS
Goddard Space Flight Center (GSFC) – http://www.nasa.gov/centers/goddard/home/
10. ESD Project Won Intel Head in the Clouds Challenge Award to Estimate Biomass in the South Sahara
Project Goal
• Use NGA data to estimate tree and bush biomass over the entire arid and semi-arid zone on the south side of the Sahara
Project Summary
• Estimate carbon stored in trees and bushes in the arid and semi-arid south Sahara
• Establish a carbon baseline for later research on expected CO2 uptake on the south side of the Sahara
Principal Investigators
• Dr. Compton J. Tucker, NASA Goddard Space Flight Center
• Dr. Paul Morin, University of Minnesota
[Image: NGA 40 cm imagery showing automated recognition of tree crowns and shadows]
11. Partners and Resources
Intel
• Professional services and funding for AWS resources
Amazon Web Services (AWS)
• Compute and storage
• Support to set up the environment
Cycle Computing
• Cloud resource management software
• Services to install and configure the software
Climate Model Data Services (CDS – GSFC Code 600)
• NGA data support
NASA Center for Climate Simulation (NCCS – GSFC Code 606.2)
• System administration, application support, and data movement
NASA CIO
• General cloud consulting and coordination support
12. Existing Sub-Saharan Arid and Semi-Arid Sub-Meter Commercial Imagery
• 9,600 strips (~80 TB) to be delivered to GSFC
• ~1,600 strips (~20 TB) already at GSFC
• Area of Interest (AOI): sub-Saharan arid and semi-arid Africa
13. The DigitalGlobe Constellation
The entire archive is licensed to the USG:
• GeoEye
• QuickBird
• IKONOS
• WorldView-1
• WorldView-2
• WorldView-3 (available Q1 2015)
14. Panchromatic and multispectral mapping at the 40- and 50-cm scale
15. Use Niger as the test case
NGA data over Niger
• Currently have about 16,000 total scenes covering Niger (the data is already orthorectified)
• For this test case, approximately 3,120 scenes need to be processed to generate the vegetation index
• Each scene is approximately 30,000 x 30,000 data points (pixels)
• Each scene will be broken up into 100 tiles (3,000 x 3,000 pixels each)
Where is the data?
• Data currently resides within the NCCS and in AWS
Additional data
• If we are successful and have additional time and resources, other African areas can be studied.
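The tiling scheme above can be sketched in a few lines. This is a scaled-down illustration (a 300 x 300 array standing in for a 30,000 x 30,000 scene), and since the deck does not name the specific vegetation index, NDVI computed from red and near-infrared bands is an assumption here:

```python
import numpy as np

def tile_scene(scene, tiles_per_side=10):
    """Split a square scene into tiles_per_side**2 equal square tiles."""
    n = scene.shape[0] // tiles_per_side
    return [scene[r * n:(r + 1) * n, c * n:(c + 1) * n]
            for r in range(tiles_per_side)
            for c in range(tiles_per_side)]

def ndvi(red, nir):
    """Normalized Difference Vegetation Index, guarding against zero denominators."""
    denom = red + nir
    safe = np.where(denom == 0, 1.0, denom)
    return np.where(denom == 0, 0.0, (nir - red) / safe)

# Scaled-down demo: a 300x300 "scene" standing in for a 30,000x30,000 one.
rng = np.random.default_rng(0)
red = rng.uniform(0.0, 0.4, (300, 300))    # hypothetical red-band reflectance
nir = rng.uniform(0.2, 0.8, (300, 300))    # hypothetical near-infrared reflectance
red_tiles = tile_scene(red)
nir_tiles = tile_scene(nir)
print(len(red_tiles))                          # 100 tiles, as in the deck
print(ndvi(red_tiles[0], nir_tiles[0]).shape)  # (30, 30)
```

At full scale, each 30,000 x 30,000 scene yields 100 tiles of 3,000 x 3,000 pixels that can be processed completely independently.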
16. Processing requirements
Based on tests run in the NCCS private cloud, the following processing requirements were estimated:
• The tests were run on a single-core (Intel E5-2670 2.5 GHz) virtual machine with 2 GB of memory
• Each of the 3,120 scenes is broken up into 100 tiles
• Each tile took 24 minutes to process
• Hence, one scene takes 24 * 100 = 2,400 minutes of total processor time (about 40 wall-clock hours)
• Tiles and scenes can be run in parallel
• Total tiles to process = 312,000
• Total compute hours = 124,800
Target completion time
• Completing in 1 month will take between 175 and 200 virtual machines running non-stop
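The sizing above is simple arithmetic and can be checked directly. The slide's 175–200 VM range presumably adds headroom over the idealized figure, since real runs never sustain 100% utilization:

```python
# Back-of-envelope check of the deck's sizing numbers (all figures from the slides).
scenes = 3_120
tiles_per_scene = 100
minutes_per_tile = 24

total_tiles = scenes * tiles_per_scene                  # 312,000 tiles
total_compute_hours = total_tiles * minutes_per_tile // 60  # 124,800 hours

hours_per_month = 30 * 24                               # ~720 hours of wall time
vms_needed = total_compute_hours / hours_per_month      # ~173 VMs at 100% utilization

print(total_tiles, total_compute_hours, round(vms_needed))  # 312000 124800 173
```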
17. Input and output data
Input data
• Total input of about 8 TB for the 3,120 scenes
• Average of about 2.63 GB of data per scene
• Average of about 26.3 MB of data per tile
Intermediate data products
• Unsure how much intermediate data will be produced; this will impact the amount of temporary space required for each run
Output data products
• Total output data is estimated to be 25% of the input data
• Estimated total output is about 2 to 3 TB
• Output data will be transferred back to the NCCS
18. Cluster configuration requirements
• Number of cores required on a single node: 1 per tile
• Memory required per node: 2 GB per tile
• Operating system: Linux (CentOS or Debian)
• Additional libraries, tools, software, compilers, or commercial software: none
• Parallelization: inherently parallel processing of each scene and/or tile
• Number of nodes: 175–200 to complete in 1 month; more can be used
• Storage per run: total input 8 TB (approx. 2.6 GB per scene); intermediate to be determined; total output back to NCCS 2 TB (approx. 25% of total input)
• Shared storage across all nodes: not required
19. Workflow
• NGA data comes from NASA and from external sources (PGC, DigitalGlobe) and is copied into the NCCS Science Cloud (internal cloud) NGA data repository on a shared file system.
• Virtual machines in the internal cloud read the data directly from the shared disk in the NASA internal cloud; no additional data movement is required.
• The Cycle Computing DataMan data transfer software stages the data to be processed into Amazon S3.
• A resource manager (batch queue) runs in AWS; scientists interact with and launch jobs through the Cycle Computing system directly in AWS.
• Virtual machines are launched in AWS; data is moved from S3 to the local storage of the VMs for processing. Products can be stored in S3 for transfer to the NCCS at a later time.
• After a job is completed, the results are copied back to the NCCS.
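The fan-out pattern in this workflow (many independent tile jobs dispatched through a batch queue) can be mimicked locally with a thread pool. This is a toy sketch, not the Cycle Computing API; `process_tile` is a hypothetical stand-in for the real per-tile job:

```python
from concurrent.futures import ThreadPoolExecutor

def process_tile(tile_id):
    """Hypothetical stand-in for the real per-tile vegetation-index job
    (which takes about 24 minutes on an actual VM); it just returns a record."""
    scene, tile = divmod(tile_id, 100)
    return {"scene": scene, "tile": tile, "status": "done"}

# Fan one scene's 100 independent tiles out across local workers, the same
# way the batch queue fans tile jobs out across AWS virtual machines.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(process_tile, range(100)))

print(len(results))   # 100
print(results[0])     # {'scene': 0, 'tile': 0, 'status': 'done'}
```

Because no tile depends on any other, the same pattern scales from a laptop pool to hundreds of cloud VMs without changing the job logic.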
20. Time line
[Gantt chart, December through September: bi-weekly tag-ups, requirements/scope, setup/configuration, test runs, transfer data to S3, configure S3 buckets, production runs, analysis, final report]
21. Why use Cycle Computing and AWS?
• The bigger goal is to analyze the entire arid and semi-arid zone on the south side of the Sahara
– About 80 TB
– 10x the data that the initial project will analyze
• On 200 virtual machines, this would take 10 months!
– How can we accelerate this?
• We can easily scale up the number of virtual machines using the Cycle Computing software and AWS resources
– Once the data is in AWS, 80 TB of data can be analyzed in approximately the same amount of time as 8 TB of data
– Scientists really love this part!
• It might take longer given that data transfers take time – data transfers and computation can be overlapped
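The scaling argument is just division: for an embarrassingly parallel workload, wall time is total compute hours over VM count. The idealized arithmetic lands slightly under the slide's 10-month figure, which presumably includes transfer and scheduling overhead:

```python
# Scaling sketch using figures from the slides: an embarrassingly parallel
# workload finishes in (total compute hours) / (number of VMs) of wall time.
base_compute_hours = 124_800                  # Niger test case (~8 TB)
full_compute_hours = base_compute_hours * 10  # full region, ~10x the data (~80 TB)

def months_to_finish(vms, compute_hours, hours_per_month=720):
    """Idealized wall time in months, assuming 100% VM utilization."""
    return compute_hours / (vms * hours_per_month)

print(round(months_to_finish(200, full_compute_hours), 1))    # ~8.7 months
print(round(months_to_finish(2_000, full_compute_hours), 1))  # ~0.9 months
```

Tenfold the VMs, one-tenth the wall time: that is the property that lets 80 TB finish in about the same calendar time as 8 TB once the data is in AWS.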
22. Thanks go to the following…
NASA
• Dr. Compton Tucker (Co-PI)
• Katherine Melocik (GSFC)
• Jennifer Small (GSFC)
• Dr. Tsengdar Lee (HQ)
• Daniel Duffy (GSFC)
• Mark McInerney (GSFC)
• Hoot Thompson (GSFC)
• Garrison Vaughn (GSFC)
• Brittany Wills (GSFC)
• Scott Sinno (GSFC)
• Ray Obrien (ARC)
• Richard Schroeder (ARC)
• Milton Checchi (ARC)
University Partners
• Paul Morin (Co-PI, Univ. Minnesota)
• Claire Porter (Univ. Minnesota)
• Jamon Van Den Hoek (Oak Ridge)
Cycle Computing
• Tim Carroll
• Michael Requa
• Carl Chesal
• Bob Nordlund
• Glen Otero
• Rob Futrick
AWS
• Jamie Baker
• Jeff Layton
There are others… my apologies to those I missed. These are typically the ones on our conference calls!
23. Thank You.
This presentation will be loaded to SlideShare the week following the Symposium.
http://www.slideshare.net/AmazonWebServices