This document summarizes a presentation about using Node.js for CPU-intensive applications like simulations and artificial intelligence. It describes a simulation called GOD that models plants and animals using an entity component system architecture. Tests show the simulation scaling to handle over 250 entities and 15 agents on a quad-core CPU, with the CPU nearly fully utilized. While Node.js and its asynchronous model proved a good fit, constraints like message passing bandwidth and frame rates limit scaling for agent computation. The conclusion is that Node.js can be effectively used for these kinds of applications when the computation and event loads are balanced.
2. WHO WE ARE
Diego Ferri
@thundo
Andrea Ghidini
@ghidosoft
2 of 3 co-founders of Looptribe
Node.js, PHP, .NET
www.looptribe.com Analysis | Consulting | Development
3. BACK TO SCHOOL
“Node.js® is a JavaScript runtime built on Chrome's V8 JavaScript engine. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient.”
5. ...BUT WHAT ABOUT CPU-INTENSIVE APPLICATIONS?
Bitcoin miner?
Self-driving car?
MMOG simulation backend?
6.
7. WHAT WE WILL SEE
FEASIBILITY PROS
BENCHMARKS CONSTRAINTS
8. HERE COMES GOD
Game simulation
Multiple clever agents
“The fraction of all people with our kind of experiences that are living in a simulation is very close to one.” [Nick Bostrom]
9. GOD: WHAT IS IT?
Two kinds of entities
PLANTS and ANIMALS
Plants
Static
Food resources
Animals
Explore
Eat nearby plants
Attack other animals
18. AGENT: RULE-BASED
Set of hard-coded rules
Inference system
Output depends on the selected rule
Features:
Simple
Acceptable agent behavior
Fast prototyping
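A rule-based agent like this can be sketched as an ordered rule table where the first matching rule wins. The rule conditions and state fields below are illustrative assumptions, not the actual GOD rules:

```javascript
// Hypothetical hard-coded rule set: first matching rule wins.
// State fields (health, plantNearby, enemyNearby) are illustrative.
const rules = [
  { when: s => s.health < 30 && s.plantNearby, action: 'eat' },
  { when: s => s.enemyNearby && s.health > 70, action: 'attack' },
  { when: () => true, action: 'explore' }, // default fallback rule
];

// the inference step: scan rules in priority order, return the first match
function decide(state) {
  return rules.find(r => r.when(state)).action;
}
```

Adding a behavior is just appending a rule, which is what makes prototyping fast; the trade-off is that the agent stays only as clever as its rule set.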
30. RESULTS
We are running a simulation
with 250 entities plus 15 agents
The CPU is nearly at full utilization
CONSTRAINTS
Message bus bandwidth
The framerate is too high for agent computation
32. CONCLUSIONS
We were supported by the strengths of the language and its wide ecosystem (e.g. neural network libraries)
We exploited an event-driven design while taxing the CPU with work
In fact, the sim spends more time doing CPU work than I/O
The trick is to find the sweet spot between the computation time and the event rate (avoid starvation!)
Yes, Node.js was a good choice for this kind of project
In the end, nothing revolutionary but yes, you can do it with Node.js.
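One common way to hit that sweet spot is to slice CPU work and yield back to the event loop between slices via setImmediate. This is a generic sketch of the technique, not the project's actual code:

```javascript
// Process a heavy batch in chunks, yielding to the event loop between
// chunks so that pending I/O and messages are not starved.
function processInChunks(items, workFn, chunkSize, done) {
  let i = 0;
  function step() {
    const end = Math.min(i + chunkSize, items.length);
    for (; i < end; i++) workFn(items[i]); // one CPU-bound slice
    if (i < items.length) setImmediate(step); // let queued events run first
    else done();
  }
  step();
}
```

The chunk size is the tuning knob: larger chunks mean less scheduling overhead but longer stalls for queued events.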
33. CREATIVE IDEAS FOR THE AUDIENCE
Multimedia processing, e.g. worker pools
Gaming, e.g. huge world sharding
Machine learning clusters
Business intelligence analysis
[D]
Hi all, we’re Diego and Andrea, co-founders of Looptribe. Actually, we are only ⅔ of the founders; the missing third is wandering around here somewhere.
Our company focuses on analysis, consulting and development.
We work multistack on node/php/.net, mostly web and mobile.
[A]
Well, we are at a NodejsConf, so we really hope that we are wasting our time with this slide.
This is Node definition taken directly from the homepage.
[A]
What really interests us are the main features:
it is single-threaded
it has asynchronous I/O
it is event driven
it’s fast and it’s V8 powered
So what are the recommended use cases?
Do you have lots of concurrent I/O-bound requests?
Do you need scalable network applications?
Yeah, node.js is the right tool.
[D]
BUT, what about “non-standard” CPU intensive applications?
Like
a bitcoin miner? a self driving car?
or a Massively Multiplayer Online Game simulation backend?
can it still be a good choice? Can it be a good framework?
[D]
OR is this what you expect?
So... does Node really have enough “POWER” for this kind of scenario?
[A]
Starting from this question we will see:
if it is feasible to use node
if yes, if there are any pros
we will then try to get a few quantitative measures
and potential constraints
[A]
To work on these points we started a pet project we humbly nicknamed God.
The true motive behind this talk is that we had wanted for a while to play with a simulation while at the same time exploring modern machine learning techniques in the form of multiple intelligent agents.
[A]
God is a simulated world with two kinds of entities: plants and animals.
Plants are static entities, they cannot move, they slowly grow and reproduce.
Animals on the other hand can move freely around the world, exploring it. They must eat plants in order to be healthy and they can attack each other to compete over resource scarcity.
[D]
Where did we start from? From the beloved monolithic…
[D]
Here is an overall view of monolith architecture.
The simulation evolves thanks to a design pattern commonly seen in game programming, the popular game loop.
In each turn of the loop, events are handled, actions are taken, and the game state is updated, while maintaining the framerate and timestepping.
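That handle-events / update / keep-timestep cycle can be sketched as a fixed-timestep loop. This is an assumption about the shape of the loop, not the actual GOD code:

```javascript
const TICK_MS = 50; // fixed simulation step: 20 updates per second

// Advance the simulation by as many fixed steps as the elapsed time allows;
// returns the leftover time to carry over into the next frame.
function runFixedSteps(accumulator, elapsedMs, update) {
  accumulator += elapsedMs;
  while (accumulator >= TICK_MS) {
    update(TICK_MS); // handle events, run systems, update the game state
    accumulator -= TICK_MS;
  }
  return accumulator;
}

// The outer loop: measure elapsed time, step, then yield to the event loop.
function startLoop(update) {
  let last = Date.now();
  let acc = 0;
  (function frame() {
    const now = Date.now();
    acc = runFixedSteps(acc, now - last, update);
    last = now;
    setTimeout(frame, TICK_MS);
  })();
}
```

Using setTimeout between frames keeps the event loop free between ticks, which is exactly where Node's asynchronous I/O gets serviced.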
Entities are the lead simulation actors. To structure entity management we adhered to another common game architectural pattern, ECS, which stands for Entity-Component-System.
What is it?
Entities can be seen as just collections of components, and components are just data (e.g. the sight component holds the values for that entity's sight angle and radius).
Systems, on the other side, work on those components. A system is where the expected behaviors or rules of the simulation live.
For example, it is where attacking or moving an entity really takes place. A single system may work on one or more components.
[D]
Here you find the list of components we actually use in God.
All components can be added or removed at runtime.
A few components are present in all entities, others are specific to certain entity types: e.g. all entities have a physics component, but only a plant entity has a plant component (with data about growth and radius).
A few are added only at times. An animal entity has a movement component only when it intends to move, not otherwise.
On the slide we see a snippet of the Plant component: compact and expressive.
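The slide itself isn't reproduced here, but a component in that spirit could plausibly look like the sketch below (field names are illustrative, not necessarily God's actual ones):

```javascript
// Hypothetical Plant component sketch: a component is just data, no behavior.
// Field names are illustrative, not necessarily God's actual ones.
function plantComponent({ growthRate = 0.1, radius = 1, maxRadius = 10 } = {}) {
  return { growthRate, radius, maxRadius };
}
```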
[D]
Systems determine agent behaviors and actions, like combat, feeding and so on. As we said before, they interact directly with the component data.
Most system purposes are quite obvious.
A couple of them are exceptions.
The spawner system keeps a constant number of entities per type (we actually scrapped a complex reproduction system during development to keep things simple).
We even have our very own alien anti-pattern, the entity manager. It's very useful because it allows managing and searching entity lists, but it is alien in the sense that most systems hold a reference to it.
Here’s the snippet for the plant grow system. Still compact and short.
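Again, the actual snippet lives on the slide; a hypothetical grow system in the same ECS spirit might look like this (the entity shape and field names are our illustrative assumptions):

```javascript
// Hypothetical plant grow system sketch: it iterates over entities holding
// a plant component and mutates only that component's data.
function growSystem(entities) {
  for (const e of entities) {
    const plant = e.components.plant;
    if (!plant) continue; // the system only cares about plant entities
    plant.radius = Math.min(plant.radius + plant.growthRate, plant.maxRadius);
  }
}
```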
[A]
We refactored the monolithic code and extracted the rendering code that was bloating the logic part.
To achieve that, we introduced a distributed architecture with a central message bus (actually an instance of RabbitMQ) that allows communication among parts.
A websocket endpoint listens to the simulation state updates and pushes them to a beautiful HTML5 rendering app.
[A]
In a later step, with a basic system in place, we moved the agent logic out of the core systems into independent processes to scale performance.
Each agent is an individual intelligent entity that runs its own logic and communicates with the simulation via messages.
A spawner process manages their lifecycle, as always via the message bus.
[A]
Let’s take a tour of the agent.
The Agent is probably the most interesting aspect of God. It is the mastermind behind an intelligent entity's behavior.
[A]
Every simulation tick, agents receive the whole world state via the message bus. Then, after having swept the world with their sensors, they decide the action to perform in the following tick.
Sensors measure the presence and distance of plants, world boundaries and other agents, and ensure that an agent is “fair” (e.g. it should not head for a plant it cannot see).
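A "fair" sight sensor of this kind can be sketched as follows (a simplified model of our own, assuming an agent with a position, a heading and the sight component's angle and radius):

```javascript
// Sketch of a fair sight sensor: an agent only perceives a target that is
// both inside its sight radius and within its field-of-view angle.
function canSee(agent, target) {
  const dx = target.x - agent.x;
  const dy = target.y - agent.y;
  if (Math.hypot(dx, dy) > agent.sightRadius) return false; // too far away
  // Absolute angular difference between heading and target bearing, in [0, PI]
  let diff = Math.abs(Math.atan2(dy, dx) - agent.heading);
  if (diff > Math.PI) diff = 2 * Math.PI - diff;
  return diff <= agent.sightAngle / 2;
}
```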
[A]
We implemented two totally different AIs.
The first one is rule-based. There is a set of rules that basically covers the various situations an agent may encounter. An inference system then selects the correct rule based on the world state and outputs that rule's action.
For instance, it is as simple as “If I’m starving then I must eat”.
There are rules to feed, to “defend” a territory and to explore in case there are no resources in the vicinity.
On the right you can see a simple pseudocode snippet with a basic rule-based behavior.
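The slide's pseudocode isn't reproduced here, but a minimal rule-based decision function in the same spirit might look like this (the rules, actions and state fields are made up for illustration):

```javascript
// Minimal rule-based AI sketch: rules are (condition, action) pairs checked
// in priority order; the first matching rule wins. Names are illustrative.
const rules = [
  { when: (s) => s.energy < 20 && s.visiblePlant, then: () => 'eat' },
  { when: (s) => s.intruderNearby, then: () => 'attack' },
  { when: () => true, then: () => 'explore' }, // fallback rule
];

function decide(state) {
  return rules.find((r) => r.when(state)).then(state);
}
```

Adding a behavior is just a matter of inserting a rule at the right priority, which is what makes this approach so quick to prototype with.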
This AI implementation is easy to code and allows fast prototyping, but nonetheless gives acceptable agent behavior.
[D]
The second approach is much more complex and involves the use of neural networks to create CPU-heavy agents.
What are neural networks in a few words?
Artificial neural networks are statistical learning models, inspired by biological brains, and are common tools in machine learning. They are composed of sets of interconnected small units, called neurons, organized in layers.
Values are presented to the network via the input layer, which communicates to one or more 'hidden layers' where the actual processing is done via a system of weighted connections. The hidden layers then link to an output layer where the final answer of the network is given. This flow from the input layer to the output layer is called forward propagation.
But there is another mode of operation, where the network (actually the output layer) is presented with pairs of desired inputs/outputs to calibrate the weights of the connections. This is called training (or backpropagation).
We adopted deep learning algorithms to operate on our deep networks. They are called “deep” because they have many hidden layers, needed to tackle complex tasks like image recognition or action planning.
[D]
The training of our networks has two distinctive traits:
* it is done online, that is the network explores the graph of possible actions while the world is evolving, striking a balance between exploration (of new knowledge) and exploitation (of prior knowledge).
* it uses reinforcement learning: our network has no handcrafted knowledge model of the environment. No correct input/output pairs are presented to the network during training; instead, a reward is granted for every action taken. The network then tries to maximize the reward received over a chain of consecutive actions. This algorithm is called Q-learning.
How does it work in layman terms?
“I’m starving? Let’s find a plant and eat it.” Good, reward this behavior!
“I’m starving? Why don’t just stand still?” Bad, discourage it with a negative reward!
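In its tabular form, the Q-learning update that formalizes this reward logic looks like the sketch below (God approximates Q with a deep network instead of a table, but the update idea is the same; ALPHA and GAMMA values are illustrative):

```javascript
// Tabular Q-learning update sketch. God approximates Q with a network,
// but the rule being learned is the same.
const ALPHA = 0.1; // learning rate
const GAMMA = 0.9; // discount factor for future rewards

function qUpdate(Q, state, action, reward, nextState, actions) {
  const key = `${state}:${action}`;
  const old = Q[key] || 0;
  // Best estimated value achievable from the next state
  const nextBest = Math.max(...actions.map((a) => Q[`${nextState}:${a}`] || 0), 0);
  Q[key] = old + ALPHA * (reward + GAMMA * nextBest - old);
  return Q[key];
}
```

A positive reward nudges the value of the chosen (state, action) pair up, a negative one nudges it down; over many ticks the agent's policy converges toward reward-maximizing chains of actions.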
Training is very time-consuming: the choice of the topology of the network and the tuning of the deep learning hyper-parameters (learning rate, batch size, etc.) took several days. The training of the final network alone was particularly CPU-intensive and took over half a day.
[D]
As libraries go, we were quite lucky. We started with Synaptic.js, which runs on both browser and Node. But it doesn't support reinforcement learning natively.
In the end we chose ConvNetJS, a library maintained by a Stanford PhD student. Sadly, there's no Node package.
As you can see in the snippet below, the base implementation using ConvNetJS is really straightforward... but don't be fooled, tuning the rewards is not!
[A]
Starting from our initial objectives, we wound up with a distributed simulation of a multi-agent world, where the agents are entitled to make their own decisions.
Last, but not least, our project sports pretty five-year-old-child graphics.
[A]
Ok, now it's demo time. If you are interested in the code, you can catch up with us after the talk; we will gladly show it to you.
[start the simulation…]
Behold God… [D] this is the part where you should say ‘Owwwww’!
The rectangle you see is in fact the simulated world. You can see two different kinds of circles there:
the green ones, which we can think of as food sources or plants. They're static and not intelligent at all;
the red ones, on the other hand, can be imagined as animals that strive to stay alive while interacting with the world. They are the agents, the intelligent part of the simulation.
True to our requirements the graphics is of a really good level ;)
Behaviors!
[A]
We’ll put the simulation in the background for a while and give you a few quantitative measures of the core project.
As we saw before, for every frame the simulation updates the world physics, runs the entities' actions and pushes the serialized world state to the message bus.
We charted the time taken by these three stages in a single frame with respect to the number of entities.
As you can see on the left, physics and logic at one point become superlinear, bogging down the simulation. Therefore a sustainable number of entities, on our reference system, is in the order of a few thousand.
On the other hand, and you can see this more clearly on the right-side chart, the serialization is quite time-consuming at first but becomes irrelevant above ten thousand entities.
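A per-stage timing harness for this kind of measurement can be sketched as follows (our illustration, using Node's high-resolution timer; the stage names mirror the three stages above):

```javascript
// Sketch of per-stage frame timing: wrap each stage of the frame and
// record its duration in milliseconds using Node's high-resolution clock.
function timeStages(stages) {
  const timings = {};
  for (const [name, fn] of Object.entries(stages)) {
    const start = process.hrtime.bigint();
    fn();
    timings[name] = Number(process.hrtime.bigint() - start) / 1e6; // ns -> ms
  }
  return timings;
}
```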
[A]
Similarly, you can see some measures for the neural network agent.
We got linear behavior for sensor scanning and deserialization with respect to the entity count.
The training/decision step is quite expensive and is proportional to the size and topology of the tested network, irrespective of the number of entities.
The deserialization takes the major share of the CPU time as soon as the number of entities grows.
On the right we tried to fit as many agents as possible on the reference system, and this is the result: we managed to squeeze in about 4-5 agents.
[D]
So, here's the recap. The simulation is running in the background with around 250 entities plus 15 agents, and the CPU is nearly at full utilization.
What about constraints?
The first one we battled with is the message bus bandwidth. Every frame we post a JSON document with the full world state. This JSON is unoptimized and uncompressed, and it can quickly weigh a few MBs.
For large worlds it can quickly become a bottleneck.
Second, if the agent logic is very complex, the simulation framerate must be balanced accordingly, otherwise an agent will lag behind and become unresponsive.
[D]
Well, we are now running a performance capped simulation. How to improve?
First: we could, of course, simply increase the number of spawner processes to distribute the agents even more horizontally.
Second, and probably the most impactful refactoring: shard the core logic, likely together with spatial partitioning of the simulated world.
However, this would require a major rewrite of the game loop, physics and entity sub-systems.
[A]
Conclusions,
Yes, Node in the end was really a good choice for this kind of project.
We were supported by the strengths of the language and its wide ecosystem (e.g. neural network libraries).
We exploited an event-driven design while taxing the CPU with work.
In fact, the sim spends more time doing CPU work than I/O.
The trick is to find the sweet spot between computation time and event rate to avoid starvation.
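One common Node technique for finding that sweet spot is to do CPU work in slices and yield to the event loop between them with setImmediate, so pending events are never starved (a sketch of the pattern, not God's actual code):

```javascript
// Process items in time-budgeted slices, yielding to the event loop
// between slices so queued I/O events and timers can still run.
function processInSlices(items, workFn, sliceMs, done) {
  let i = 0;
  function slice() {
    const start = Date.now();
    // Burn CPU only up to the slice budget, then yield with setImmediate
    while (i < items.length && Date.now() - start < sliceMs) {
      workFn(items[i++]);
    }
    if (i < items.length) setImmediate(slice);
    else done();
  }
  slice();
}
```

Shrinking the slice budget favors event responsiveness; growing it favors raw throughput, which is exactly the computation/event-rate trade-off described above.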
In the end, nothing revolutionary but yes, you can do it with Node.js.
[D]
Ok, it was all really nice, but what of it? We will leave you with a few ideas...
We have already used Node in production in the past for multimedia processing, with worker pools converting audio/video streams with ffmpeg, but it can be used in many other contexts.
[choice…]
remember: prototype
[D]
Thank you, we are open to questions and we hope you enjoyed our talk.