We’ve created some guidelines to help you use our brand and assets, including our logo, content and trademarks, without having to negotiate legal agreements for each use. To make any use of our marks in a way not covered by these guidelines, please contact us at brand@hashicorp.com and include a visual mockup of intended use.
HashiCorp's understanding of digital transformation and the different areas that require specific solutions. Provides an overview of each of the tools: Terraform, Consul, Vault, and Nomad.
Can and should Apache Kafka replace a database? How long can and should I store data in Kafka? How can I query and process data in Kafka? These are common questions that come up more and more. This session explains the idea behind databases and different features like storage, queries, transactions, and processing to evaluate when Kafka is a good fit and when it is not.
The discussion includes different Kafka-native add-ons like Tiered Storage for long-term, cost-efficient storage and ksqlDB as an event streaming database. The relationship and trade-offs between Kafka and other databases are explored so that they can complement each other rather than one replacing the other. This includes different options for pull- and push-based bi-directional integration.
Key takeaways:
- Kafka can store data forever in a durable and highly available manner
- Kafka has different options to query historical data
- Kafka-native add-ons like ksqlDB or Tiered Storage make Kafka more powerful than ever before to store and process data
- Kafka does not provide transactions, but exactly-once semantics
- Kafka is not a replacement for existing databases like MySQL, MongoDB or Elasticsearch
- Kafka and other databases complement each other; the right tool has to be selected for each problem
- Different options are available for bi-directional pull and push-based integration between Kafka and databases to complement each other
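The append-only log model behind these takeaways can be sketched in a few lines. This is a toy illustration, not Kafka's implementation; the `CommitLog` class and its methods are invented purely for the example, showing why offset-based storage makes it possible to retain data indefinitely and replay history.

```python
# Toy sketch (NOT Kafka's implementation) of the append-only log idea:
# records get monotonically increasing offsets, and any reader can
# replay history from an arbitrary offset.

class CommitLog:
    """Minimal single-partition commit log with offset-based reads."""

    def __init__(self):
        self._records = []  # append-only storage

    def append(self, value):
        """Append a record and return its offset."""
        self._records.append(value)
        return len(self._records) - 1

    def read_from(self, offset):
        """Replay all records at or after the given offset."""
        return self._records[offset:]

log = CommitLog()
for event in ["created", "updated", "deleted"]:
    log.append(event)

print(log.read_from(0))  # full history
print(log.read_from(1))  # resume from offset 1
```

Because records are never mutated in place, "querying historical data" reduces to re-reading from an older offset, which is the property the takeaways above rely on.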
Video Recording:
https://youtu.be/7KEkWbwefqQ
Blog post:
https://www.kai-waehner.de/blog/2020/03/12/can-apache-kafka-replace-database-acid-storage-transactions-sql-nosql-data-lake/
Spotify in the Cloud - An evolution of data infrastructure - Strata NYC (Josh Baer)
Slides from a presentation given by Alison Gilles and Josh Baer during StrataNYC 2017.
Covers the decisions, challenges, and strategy (technical, organizational, and people-related) for moving the data and processing of Spotify's 2,500-node Hadoop cluster to Google Cloud.
Finally, touches on Spotify's resulting infrastructure on GCP.
Hortonworks Data in Motion Webinar Series Part 7: Apache Kafka and NiFi, Better Together (Hortonworks)
Apache NiFi, Storm and Kafka augment each other in modern enterprise architectures. NiFi provides a coding-free solution to get many different formats and protocols in and out of Kafka and complements Kafka with full audit trails and interactive command and control. Storm complements NiFi with the capability to handle complex event processing.
Join us to learn how Apache NiFi, Storm and Kafka can augment each other for creating a new dataplane connecting multiple systems within your enterprise with ease, speed and increased productivity.
https://www.brighttalk.com/webcast/9573/224063
Best Practices for Running Oracle Databases on Amazon RDS (DAT317) - AWS re:Invent (Amazon Web Services)
Amazon Relational Database Service (Amazon RDS) continues to be a popular choice for Oracle DBAs moving new and legacy workloads to the cloud. In this session, we discuss how Amazon RDS for Oracle helps DBAs focus their time where it matters most. We cover recent RDS Oracle features, and we go deep on key functionality that enables license optimization, performance, and high availability for Oracle databases. We also hear directly from an AWS customer about their journey to Amazon RDS and the best practices that helped make their move successful.
Amazon Elastic Fabric Adapter: Anatomy, Capabilities, and the Road Ahead (inside-BigData.com)
In this deck from the 2019 OpenFabrics Workshop in Austin, Raghu Raja from Amazon presents: Amazon Elastic Fabric Adapter: Anatomy, Capabilities, and the Road Ahead.
Elastic Fabric Adapter (EFA) is the recently announced HPC networking offering from Amazon for EC2 instances. It allows applications such as MPI to communicate using the Scalable Reliable Datagram (SRD) protocol that provides connectionless and unordered messaging services directly in userspace, bypassing both the operating system kernel and the Virtual Machine hypervisor. This talk presents the designs, capabilities, and an early performance characterization of the userspace and kernel components of the EFA software stack. This includes the open-source EFA libfabric provider, the generic RDM-over-RDM (RxR) utility provider that extends the capabilities of EFA, and the device driver itself. The talk will also discuss some of Amazon's recent contributions to libfabric core and future plans.
Watch the video: https://wp.me/p3RLHQ-k2I
Learn more: https://www.openfabrics.org/2019-workshop-agenda-and-abstracts/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
This session focuses on HashiCorp Vault, a secrets management tool. Managing secrets for two or three environments by hand is feasible, but with ten or more environments it becomes very painful, especially when secrets are dynamic and need to be rotated periodically. HashiCorp Vault can manage both static and dynamic secrets, and it can also automate secret rotation.
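The rotation idea described above can be sketched conceptually. The code below is a toy model invented for illustration, not the Vault API: it shows how minting per-environment credentials with a time-to-live removes the need for any environment to hold a long-lived secret.

```python
# Conceptual sketch only (NOT the HashiCorp Vault API): dynamic secrets
# are issued with a TTL and re-minted once they expire, so rotation is
# automatic rather than a manual task per environment.

import secrets
import time

class DynamicSecretStore:
    """Toy store that mints short-lived credentials per environment."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._leases = {}  # env -> (secret, expiry timestamp)

    def get_secret(self, env):
        """Return a valid secret for env, rotating it if expired."""
        now = time.time()
        lease = self._leases.get(env)
        if lease is None or lease[1] <= now:
            # Expired or never issued: mint a fresh credential.
            lease = (secrets.token_hex(16), now + self.ttl)
            self._leases[env] = lease
        return lease[0]

store = DynamicSecretStore(ttl_seconds=3600)
# Ten environments, each with its own auto-rotated credential:
creds = {f"env-{i}": store.get_secret(f"env-{i}") for i in range(10)}
```

The point of the sketch is the lease check in `get_secret`: callers never cache credentials themselves, so rotation happens transparently however many environments exist.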
Amazon Web Services gives you fast access to flexible, low-cost IT resources, so you can rapidly scale and build virtually any big data and analytics application, including data warehousing, clickstream analytics, fraud detection, recommendation engines, event-driven ETL, serverless computing, and Internet of Things processing, regardless of the volume, velocity, and variety of your data.
In this one-hour webinar, we will look at the portfolio of AWS Big Data services and how they can be used to build a modern data architecture.
We will cover:
Using different SQL engines to analyze large amounts of structured data
Analysing streaming data in near-real time
Architectures for batch processing
Best practices for Data Lake architectures
This session is suited for:
Solution and enterprise architects
Data architects/ Data warehouse owners
IT & Innovation team members
Amazon S3 Best Practice and Tuning for Hadoop/Spark in the Cloud (Noritaka Sekiyama)
Amazon S3 Best Practice and Tuning for Hadoop/Spark in the Cloud (Hadoop / Spark Conference Japan 2019)
# English version #
http://hadoop.apache.jp/hcj2019-program/
This AWS Certification tutorial explains all the certifications offered by AWS, the important topics to learn, and the exam pattern. It also covers job trends and the market demand for each certification. This tutorial is ideal for those who want to become AWS Certified Professionals.
Below are the topics covered in this tutorial:
1. Amazon Web Services
2. AWS Job Trends
3. AWS Certifications
4. AWS Exam
5. How to Prepare for your AWS Exam?
6. AWS Learning Path
#awscertification #amazoncloud #awstraining #awsjobs
Watch this talk here: https://www.confluent.io/online-talks/apache-kafka-architecture-and-fundamentals-explained-on-demand
This session explains Apache Kafka’s internal design and architecture. Companies like LinkedIn are now sending more than 1 trillion messages per day to Apache Kafka. Learn about the underlying design in Kafka that leads to such high throughput.
This talk provides a comprehensive overview of Kafka architecture and internal functions, including:
-Topics, partitions and segments
-The commit log and streams
-Brokers and broker replication
-Producer basics
-Consumers, consumer groups and offsets
This session is part 2 of 4 in our Fundamentals for Apache Kafka series.
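Two of the internals listed above can be sketched in miniature. The functions below are illustrative toys invented for this example, not Kafka's actual code: Kafka's default partitioner hashes the record key (using murmur2 in the real implementation) to pick a partition, and a consumer group divides partitions among its members so each partition has exactly one owner.

```python
# Illustrative sketches (NOT Kafka's implementation) of key-based
# partitioning and consumer-group partition assignment.

import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Assign a record to a partition by hashing its key.
    (Kafka's default partitioner uses murmur2; md5 is used here
    purely for illustration.)"""
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

def assign_partitions(partitions, consumers):
    """Round-robin partition assignment within one consumer group."""
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

# Records with the same key always land in the same partition,
# which is what preserves per-key ordering:
assert partition_for("user-42", 6) == partition_for("user-42", 6)

# Each consumer in a group owns a disjoint subset of partitions:
print(assign_partitions([0, 1, 2, 3], ["c1", "c2"]))
```

Because each partition is owned by exactly one consumer in the group, offsets can be tracked per partition per group, which is the mechanism behind the consumer-group and offset discussion in the talk.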
Learn how ScyllaDB Cloud is moving to serverless, transforming its single-tenant deployment model into a multi-tenant architecture based on Kubernetes. Discover the engineering innovation required, and the user value of the new architecture, including use of encryption (both in flight and at rest), performance isolation, and the capability to scale elastically.
This presentation covers the basics of Terraform, an infrastructure-as-code tool, and helps DevOps teams get started with it.
This document will be helpful for developers who want to understand infrastructure-as-code concepts and the usability of Terraform.
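Since Terraform also accepts JSON-syntax configuration files (*.tf.json), the infrastructure-as-code idea can be sketched from Python. This is a minimal illustrative sketch, not material from the presentation; the `local_file` resource and its values are placeholders chosen for the example, not a deployable configuration.

```python
# Sketch: rendering a minimal Terraform configuration in JSON syntax.
# Terraform reads *.tf.json files as an alternative to HCL, so a config
# can be generated, stored, and versioned like any other code.

import json

config = {
    "terraform": {"required_version": ">= 1.0"},
    "resource": {
        "local_file": {
            "example": {
                "filename": "hello.txt",
                "content": "managed by Terraform",
            }
        }
    },
}

# Saved as main.tf.json, this is a config `terraform plan` can read.
rendered = json.dumps(config, indent=2)
print(rendered)
```

The payoff of the approach, as the presentation argues, is that the desired infrastructure lives in version control rather than in any one operator's head.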
Pharo is a purely object-oriented, fully reflective and dynamic language, with a simple syntax that fits on a postcard. It offers very high productivity thanks to its interactivity, the closeness to objects it gives programmers, and how quickly it can be learned.
Inspired by Smalltalk, Pharo provides a set of adaptable code-analysis tools that programmers can easily customize.
Pharo immerses you in constant interaction with running objects, whether simple objects, GPUs, web applications, 3D objects, or your own classes, for total immersion in and mastery of the code.
We will give a brief overview of what startups are building with it in France and internationally.
A unique opportunity to discover a different language and quickly exploit its new development potential!
To go further:
- http://books.pharo.org/
- http://consortium.pharo.org/
- http://mooc.pharo.org/
- https://pharoweekly.wordpress.com/
Published July 26th, 2017
A slightly edited version of the Wearables slide deck.
Presented to entire Liquid Studio Team as part of the weekly studio sessions.
Accenture | Liquid Studio
Wearables Team
Summer 2016 Intern
---
FVCproductions
https://fvcproductions.com
Hire a Machine to Code - Michael Arthur Bucko & Aurélien Nicolas (WithTheBest)
Bucko and Nicolas share their vision and products, as well as an explanation of what Deckard is, with insights from their software development team. They believe coding can solve the problems we face, and they place their hopes in source code as the way to fix human errors.
Michael Arthur Bucko & Aurélien Nicolas
From Zero to Cloud: Revolutionize your Application Life Cycle with OpenShift PaaS (OpenShift Origin)
From Zero to Cloud: Revolutionize your Application Life Cycle with OpenShift PaaS
Talk given by Diane Mueller, OpenShift Origin Community Manager at FISL 15 on May 9th, 2014
Building frameworks: from concept to completion (Ruben Goncalves)
What are the considerations when building a framework or library? How do they apply to OutSystems components? In this session, we'll do a deep dive into the importance of addressing concepts like code granularity and architecture in order to create useful, future-proof, and coherent frameworks that deliver the best possible developer experience.
Exploring the world of open source design, looking at what designers are doing with open source tools like GIMP, Inkscape, and Blender. We also look at how designers get creative with interface design using various designer-friendly open source languages like CSS, PHP, JS, and more.
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
SAP Sapphire 2024 - ASUG301 Building Better Apps with SAP Fiori (Peter Spielvogel)
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Transcript: Selling digital books in 2024: Insights from industry leaders (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
PHP Frameworks: I want to break free (IPC Berlin 2024) - Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk encourages a more independent use of PHP frameworks, moving towards more flexible and future-proof PHP development.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
- See how to accelerate model training and optimize model performance with active learning
- Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
- Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
7. The Tao of HashiCorp is the foundation that guides our vision, roadmap, and product design. It is valuable to understand the motivations and intentions behind our products.
Infrastructure as Code
The belief that all processes should be written as code, stored, and versioned. Operations teams have historically relied on oral tradition to pass along the knowledge of how to build, upgrade, and triage infrastructure.
Workflows not Technologies
Product design starts with an envisioned workflow to achieve a set goal. We then identify existing tools that simplify the workflow. If a sufficient tool does not exist, we step in to build it.
Modular and Open
This approach prefers many smaller components with well-defined scopes that can be used together. The alternative approach is monolithic, in which a single tool has a nebulous scope that expands to encompass new features and capabilities.
TAO 02 - OVERVIEW
8. TAO 02 - INFRASTRUCTURE AS CODE
Infrastructure and code are central figures in design at HashiCorp. It allows teams to collaborate on infrastructure. This is where the complexity begins. An operator writes code which will be transformed. It is the systems we design that allow teams to create and orchestrate infrastructure at enterprise scale.
HashiCorp Vagrant is where our company's journey started with infrastructure as code. Vagrant is also where design at HashiCorp started. In systems navigated by workflows, it is where an individual user's journey starts as well.
Understand the Design
9. Workflows at each layer of The Stack — provisioning, security, and runtime — provide teams the ability to work in parallel, allowing greater application delivery speed and, as a result, exponential business value.
Workflows contain 3 stages — Write, Test, and Create. The aforementioned layers of The Stack encompass the Test stage. For teams, our products tame complexity arriving from the Write stage, ultimately leading to further generation of complexity in the form of provisioned infrastructure, secure access, or deployed applications in the Create phase.
[Diagram: Write (complexity), Test (simplicity), Create (complexity)]
Understand the Design
TAO 02 - WORKFLOWS
10. TAO 02: MODULAR AND OPEN
[Diagram: ANY APPLICATION (VM, container) running on ANY INFRASTRUCTURE (private cloud, bare metal)]
HashiCorp embraces the open and extensible nature of the DevOps ecosystem of partners and technology. Diamonds are stacked. Mosaics are found in our patterns. Individual units are atomic, interchangeable, and composable.
Understand the Design
12. LOGO 03
LOGOS 01
DEVOPS DELIVERED
[Diagram: logo anatomy with labels ICON, TAGLINE, SIGNATURE, LOGO, WORDMARK]
13. LOGOS 01
LOGO
1. HashiCorp Vertical: alternate orientation where a square is needed
2. HashiCorp Symbol: our icon
3. HashiCorp White: the primary logo, used on dark backgrounds
Inverted: use the inverted, black versions on light backgrounds
LOGO 03: USAGE
14. RESEARCH
The most important addition to our
brand hierarchy is the HashiCorp
attribution. We employ this attribution so
that users of our products are aware they
are part of the HashiCorp Suite.
[Diagram: brand hierarchy with levels Parent, Division, Product, Attributed Product, and Enterprise Product, illustrated by Terraform, Nomad, and Consul ENTERPRISE]
LOGO 03: BRAND HIERARCHY
15. LOGO 03: CLEAR SPACE
Clear space is defined by the width of the
space between the HashiCorp icon and our
wordmark. The same spacing rules are
applied to the product logos.
Consul
17. Full Color
Prefer the open source logo, using the accent color and black text, on a light background.
Vagrant
Tonal Reversed
Use on dark backgrounds and in combination with other HashiCorp tools, for instance when shown in the HashiCorp Suite.
Vagrant
PRODUCT LOGOS 04: VAGRANT
[Vagrant logo shown in Full Color, Monochrome Tonal, and Monochrome One Color variants]
24. OPEN SOURCE
HashiCorp Suite
All of HashiCorp’s foundational technologies are open
source. Our tools are designed to address the realities of
datacenter heterogeneity: physical machines, virtual
machines, containers, serverless architectures and
whatever comes after that. Our focus is on workflows,
not technologies.
25. Monochrome Flat Logos
Enterprise logos use monochrome flat white logos on a dark background.
Notes
The HashiCorp suite empowers organizations to provision hybrid
infrastructure, secure secrets across distributed applications, and run
dynamic resources for modern applications.
Product Suite
ENTERPRISE
Terraform Vault Nomad Consul
SUITE 04: ENTERPRISE
Often the enterprise horizontal strap will be followed by in-depth dives of each product. When this is the case we can forgo the product wordmarks; the logos then form a compound strap icon.
26. All of HashiCorp’s foundational technologies are open source. Our tools
are designed to address the realities of datacenter heterogeneity:
physical machines, virtual machines, containers, serverless
architectures and whatever comes after that. Our focus is on workflows,
not technologies.
Tool Suite
OPEN SOURCE
Full Color Logos
Use full color logos when open source is the context and:
- a white background is necessary
- you are doing deep dives of individual tools, for instance giving a talk on HashiCorp Consul that you want to put in the context of the DevOps Tool Suite
- you are talking about tool integrations, like HashiCorp Packer and Terraform
Notes
- Use the vertical logo orientation, laying the products out in a horizontal strap.
- The HashiCorp logo should be present in combination with the strap. When doing this there is no need to use HashiCorp attribution on each product logo.
SUITE 01
Consul Nomad Vault Terraform Packer Vagrant
27. Tool Suite
OPEN SOURCE
Consul Nomad Vault Terraform Packer Vagrant
Monochrome Tonal Logos
Use monochrome tonal logos when the context is HashiCorp and:
- a strong corporate brand presence is desired
- the focus is the suite integrations
SUITE 01
Notes
30. COLOR 02: THEMES
Use a product color theme for brand
recognition and impact.
Color facilitates recognition and
promotes brand equity. The products
very deliberately have their own
personalities which can be used
effectively individually. Use the
HashiCorp primary color palette of
black, white, and blue to present unity.
[Swatches: 80% brightness, 40% saturation, 16% saturation]
Consider when choosing a theme:
Persona: Know your audience. Developers find themselves in code editors all day. HashiCorp uses dark themes heavily as an homage to hacker culture.
Medium: Light themes are more appropriate, even necessary, in decks and print. Long-form content, whether print or digital, should be on a light theme.
Emotion: Dark UIs are trendy and elegant. Decks are a good example of combo usage, where we set the tone with a dark cover slide and immediately switch to light for content.
[Swatches: Product Color, Dark, Light]
31. COLOR 02: THEMES
Dark vs Light
The corporate color palette was created to work equally well on dark and light themes. The dark and light theme palettes were themselves created to span corporate and product. This document is itself an example of theme application and should be used as a catalog of usage.

DARK
- Black Pearl #202831
- Black #000000
- Dark Electric Blue #506379
- Cadet Blue #B3BAC6 (body copy)
- White #FFF
Roles: base color, headlines, medium contrast, low contrast

[Reference implementation: headline, body copy slightly contrasted, boxed content]
32. COLOR 02: THEMES

LIGHT
- White #FFFFFF
- White Lilac #F7F8FB
- Black #000
- Dark Electric Blue #506379
- Trout #4F5258
Roles: base color, headlines, body copy, medium contrast, low contrast

[Reference implementation: headline, body copy slightly contrasted, boxed content]
34. TYPOGRAPHY: STYLES, TYPE SCALE
PRIMARY FONT: Klavika (Light, Regular, Medium, Bold)
SECONDARY FONT: Open Sans (Light, Regular, Semibold, Bold)
Type scale:
Infrastructure - 42px
Infrastructure - 32px
Infrastructure - 24px
Infrastructure - 20px
Infrastructure - 15px
35. TYPOGRAPHY: STYLES, CODE SAMPLE
MONOSPACE FONT: Fira Mono (Regular, Medium, Bold)

job "docs" {
  group "example" {
    task "server" {
      dispatch_payload {
        file = "config.json"
      }
    }
  }
}
37. GEOMETRY
The HashiCorp icon is set inside a hexagon. The hexagon, along with our other core geometry (diamonds, parallelograms, circumscribed circles, and triangles) can be found in the isometric grids that underpin our design.
Hexagons are used as icons when depicting resources. Hexagons are the building blocks of infrastructure.
The term "diamond" is used in mathematics to refer to a rhombus. In our case it is a projected square. Diamonds atomically represent applications; in total they can represent a microservice architecture. We often use filled shapes for apps, as opposed to stroked, as a contrast to the stroked planes representing our products' architecture and the infrastructure underneath.
Isometric projection is a method for visually representing three-dimensional objects in two dimensions. HashiCorp uses the isometric grid for this projection. There are 3 types of lines that comprise this base grid: vertical lines, 30° lines to the right, and 30° lines to the left. We make use of these lines, in different combinations, for patterns and graphics.
[Diagram: HEXAGON and DIAMOND constructions labeled ENTERPRISE, CONTROL PLANE, and APPLICATION; the diamond has width w and height h, with arctan(h/w) = 30°]
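The angle relation labeled in the diagram fixes the diamond's proportions. As a worked step (derived only from the w, h, and 30° labels in the source figure):

```latex
\arctan\!\left(\frac{h}{w}\right) = 30^\circ
\quad\Longrightarrow\quad
\frac{h}{w} = \tan 30^\circ = \frac{1}{\sqrt{3}} \approx 0.577
```

So a diamond drawn on the isometric grid is roughly 1.73 times as wide as it is tall.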
48. PHOTOGRAPHY
Product Suite
ENTERPRISE
Terraform Vault Nomad Consul
52. PHOTOGRAPHY
PROVISION HYBRID INFRASTRUCTURE
Terraform
Provision infrastructure and application resources across public cloud, private cloud, and external services.
54. HashiCorp Consul is a distributed, highly available, and datacenter-aware solution for service discovery.
Consul
ENTERPRISE
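As a sketch of the service-discovery workflow Consul enables, here is a hypothetical agent service definition; the service name, port, and health-check endpoint are illustrative, not from this guide:

```hcl
# Hypothetical service registration for a local Consul agent.
# Name, port, and health-check endpoint are illustrative.
service {
  name = "web"
  port = 8080

  check {
    http     = "http://localhost:8080/health"
    interval = "10s"
  }
}
```

Once registered, other services can locate "web" through Consul's DNS or HTTP catalog interfaces.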
55. DYNAMIC APPLICATION SCHEDULER
Nomad
ENTERPRISE
HashiCorp Nomad is a distributed, highly available, datacenter-aware cluster manager and scheduler for deploying applications on any infrastructure, at any scale.
57. Iso Art
Illustration using the grid to produce line art. The designer lays out the grid and then 'breaks' it to assemble art. Color can be combined to show movement or highlight pieces, or, as in the case of our cloud, to create complex iconography. Future designers can play with the forms to see how minimal or complex we can go. Later we'll animate.
Adopt Cloud