The Windows Azure platform opens the door to developing applications using the new paradigm: "the Cloud". Applications that are scalable, redundant, and closer to the end user — all built on the knowledge you already have and the new Visual Studio 2010.
Node Summit 2016: Web App Architectures - Chris Bailey
While Node.js is becoming the platform of choice for web-scale applications, enterprises are resistant to change and have legacy applications based on other technologies, typically Java. Emerging web application architectures bring together the web-scale and integrated browser characteristics of Node.js with the transactional nature of Java to deliver high-performance, engaging web applications. Learn how the complementary characteristics of Node.js and Java are being used to build the next generation of web applications.
Mete Atamel: "Resilient microservices with Kubernetes" - IT Event
Talk description: Creating a single microservice is a well understood problem. Creating a cluster of load-balanced microservices that are resilient and self-healing is not so easy. Managing that cluster with rollouts and rollbacks, scaling individual services on demand, securely sharing secrets and configuration among services is even harder.
Building A Production-Level Machine Learning Pipeline - Robert Dempsey
With so many options to choose from, how do you select the right technologies for your machine learning pipeline? Do you purchase bare metal and hire a DevOps team, install Spark on EC2 instances, use EMR and other AWS services, or combine Spark and Elasticsearch? View this talk for a first-hand account of building ML pipelines: which options were considered, how the final solution was selected, the tradeoffs made, and the final results.
Any startup has to have a clear go-to-market strategy from the beginning. Similarly, any data science project has to have a go-to-production strategy from its first days, so it could go beyond proof-of-concept. Machine learning and artificial intelligence in production would result in hundreds of training pipelines and machine learning models that are continuously revised by teams of data scientists and seamlessly connected with web applications for tenants and users.
In this demo-based talk we will walk through the best practices for simplifying machine learning operations across the enterprise and providing a serverless abstraction for data scientists and data engineers, so they could train, deploy and monitor machine learning models faster and with better quality.
Developing ML-enabled Data Pipelines on Databricks using IDE & CI/CD at Runta... - Databricks
Data & ML projects bring many new complexities beyond the traditional software development lifecycle. Unlike software projects, they cannot be abandoned after they have been successfully delivered and deployed; they must be continuously monitored to check that model performance still satisfies all requirements. We can always get new data with new statistical characteristics that can break our pipelines or degrade model performance.
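The monitoring this abstract calls for can start very small. As an illustrative sketch (not the talk's own method), the Population Stability Index is a common way to flag when new data has drifted away from the data a model was trained on; the `psi` function and the thresholds below are conventional rules of thumb:

```python
import numpy as np

def psi(reference, current, bins=10):
    """Population Stability Index between two 1-D samples.

    Bin edges come from the reference sample's quantiles, so each
    reference bin holds roughly 1/bins of the data; we then compare
    the fraction of each sample falling into every bin.
    """
    inner_edges = np.quantile(reference, np.linspace(0, 1, bins + 1))[1:-1]

    def fractions(data):
        idx = np.searchsorted(inner_edges, data, side="right")
        return np.bincount(idx, minlength=bins) / len(data)

    ref = np.clip(fractions(reference), 1e-6, None)  # floor avoids log(0)
    cur = np.clip(fractions(current), 1e-6, None)
    return float(np.sum((cur - ref) * np.log(cur / ref)))

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 10_000)
same_dist = rng.normal(0, 1, 10_000)
shifted = rng.normal(1, 1, 10_000)  # mean moved by one standard deviation

assert psi(baseline, same_dist) < 0.1  # common rule of thumb: stable
assert psi(baseline, shifted) > 0.25   # rule of thumb: significant drift
```

A scheduled job computing this per feature against the training snapshot is one lightweight way to make "continuously monitored" concrete.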
Scaling up Deep Learning by Scaling Down - Databricks
In the last few years, deep learning has achieved dramatic success in a wide range of domains, including computer vision, artificial intelligence, speech recognition, natural language processing and reinforcement learning.
"Why we all build bad architectures and how to stop doing it", Vova Kyrychenko - Fwdays
We will look at common mistakes in large-system architecture that lead to serious, even catastrophic, consequences for the business. Real disaster case studies will be presented, along with analysis of their causes, by people who professionally conduct technical due diligence of companies and consult on problem architectures.
Teams often waste effort on such useless things as integration tests and maintaining multiple nonproduction environments. Moving to an only-production viewpoint would save countless engineering cycles and put effort where it matters.
In this session we will discuss why eliminating nonproduction environments is not such a crazy idea. We will review tools and practices that would help teams to deliver their services with confidence and much faster than with standard approaches.
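One prerequisite for shipping straight to production with confidence is the ability to ramp changes gradually. As a hedged illustration (the session's actual tooling is not specified), a deterministic percentage rollout can be sketched in a few lines; `is_enabled` and the flag name are hypothetical:

```python
import hashlib

def is_enabled(flag: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically place a user in a 0-99 bucket for a flag.

    The same (flag, user) pair always hashes to the same bucket, so a
    user's experience stays stable while the rollout percentage ramps.
    """
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < rollout_percent

users = [str(u) for u in range(1000)]
at_10 = {u for u in users if is_enabled("new-checkout", u, 10)}
at_50 = {u for u in users if is_enabled("new-checkout", u, 50)}

# Ramping 10% -> 50% only adds users; nobody already enabled is flipped off.
assert at_10 <= at_50
assert all(is_enabled("new-checkout", u, 100) for u in users)
```

Because the bucketing is a pure function of the flag and user, no nonproduction environment is needed to reason about who sees what.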
Measure and Increase Developer Productivity with Help of Serverless at AWS Co... - Vadym Kazulkin
The goal of Serverless is to focus on writing the code that delivers business value and offload everything else to trusted partners (such as cloud providers or SaaS vendors). You want to iterate quickly, and today’s code quickly becomes tomorrow’s technical debt. In this talk we will show why Serverless adoption increases developer productivity and how to measure it. We will also go through AWS Serverless architectures where you only glue together different Serverless managed services, relying solely on configuration and minimizing the amount of code written.
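To make the "write only the code that delivers business value" point concrete, here is a minimal AWS Lambda-style handler sketch. The `handler(event, context)` signature is Lambda's real Python contract; the API Gateway proxy event/response shapes are simplified and the business logic is invented for illustration:

```python
import json

def handler(event, context):
    """Minimal AWS Lambda-style handler (hypothetical business logic).

    With an API Gateway proxy integration, the HTTP body arrives as a
    JSON string in event["body"]; the dict returned below is the shape
    the proxy integration expects back.
    """
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally, the handler is just a function: invoke it with a fake event.
response = handler({"body": json.dumps({"name": "serverless"})}, None)
assert response["statusCode"] == 200
assert json.loads(response["body"])["message"] == "hello, serverless"
```

Everything outside this function (routing, scaling, TLS, retries) is configuration on managed services, which is exactly the productivity argument the talk makes.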
Join us for a deep dive into Windows Azure. We’ll start with a developer-focused overview of this brave new platform and the cloud computing services that can be used either together or independently to build amazing applications. As the day unfolds, we’ll explore data storage, SQL Azure™, and the basics of deployment with Windows Azure. Register today for these free, live sessions in your local area.
Building Cloud-Native Applications with Microsoft Windows Azure - Bill Wilder
Cloud computing is here to stay, and it is never too soon to begin understanding the impact it will have on application architecture. In this talk we will discuss the two most significant architectural mind-shifts, covering the key pattern changes in general and seeing how these new cloud patterns map naturally onto specific programming practices in Windows Azure. Specifically, this relates to (a) Azure Roles and Queues and how to combine them using cloud-friendly design patterns, and (b) the combination of relational and non-relational data, how to decide between them, and how to combine them. The goal is for mere mortals to build highly reliable applications that scale economically. The concepts discussed in this talk are relevant for developers and architects building systems for the cloud today, or who want to be prepared to move to the cloud in the future.
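The Roles-and-Queues pattern described above boils down to a producer/consumer hand-off: the web role enqueues work and returns quickly, while a worker role drains the queue in the background. A minimal in-process sketch (in the real pattern an Azure Storage Queue would replace `queue.Queue`; the order-processing logic is invented):

```python
import queue
import threading

# A thread-safe in-process queue stands in for the cloud queue here,
# just to show the shape of the pattern.
work = queue.Queue()
results = []

def web_role():
    for order_id in range(5):  # e.g. incoming HTTP requests
        work.put(order_id)     # enqueue instead of doing the work inline
    work.put(None)             # sentinel: no more work

def worker_role():
    while True:
        item = work.get()
        if item is None:
            break
        results.append(item * 10)  # the slow/background processing

producer = threading.Thread(target=web_role)
consumer = threading.Thread(target=worker_role)
producer.start(); consumer.start()
producer.join(); consumer.join()

assert results == [0, 10, 20, 30, 40]
```

The cloud-friendly part is that the two roles share nothing but the queue, so each can be scaled (or fail and restart) independently.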
This talk was delivered by Bill Wilder at the Vermont Code Camp 2 on 11-Sept-2010.
How a National Transportation Software Provider Migrated a Mission-Critical T... - Amazon Web Services
In this webinar, Cascadeo will show you how they helped a national transportation software provider build an AWS architecture that enables them to effectively support more than 3,300 complex integration tests against nightly builds of their Interoperable Train Control Messaging (ITCM) application. You’ll also learn about how this software provider can scale on-demand, has improved governance and cost management, and rapidly supports new projects without increasing IT overhead using AWS.
AZUG.BE - Azure User Group Belgium - First public meeting - Maarten Balliauw
- What is AZUG? Who is who?
- An overview of the Azure platform
- .NET Services
- Enterprise reasons to adopt the cloud
- Getting started with Azure
- Open discussion
For our next ArcReady, we will explore a topic on everyone’s mind: cloud computing. Several companies in the industry have announced cloud computing services. In October 2008 at the Professional Developers Conference, Microsoft announced the next phase of our Software + Services vision: the Azure Services Platform. The Azure Services Platform provides a wide range of internet services that can be consumed from both on-premises environments and the internet.
Session 1: Cloud Services
In our first session we will explore the current state of cloud services. We will then look at how applications should be architected for the cloud and explore a reference application deployed on Windows Azure. We will also look at the services that can be built for on-premises applications using .NET Services, and address some of the concerns that enterprises have about cloud services, such as regulatory and compliance issues.
Session 2: The Azure Platform
In our second session we will take a slightly different look at cloud-based services by exploring Live Mesh and Live Services. Live Mesh is a data synchronization client with a rich API to build applications on. Live Services are a collection of APIs that can be used to create rich applications for your customers, based on internet-standard protocols and data formats.
Astroinformatics 2014: Scientific Computing on the Cloud with Amazon Web Serv... - Jamie Kinney
An overview of Amazon Web Services (AWS) and a survey of scientific computing applications of cloud computing. Examples come from the fields of astronomy and high-energy physics, and include work from CERN, NASA and others.
Continuous Delivery for Desktop Applications: a case study - Miguel Alho & Jo... - Comunidade NetPonto
Continuous Delivery is a key enabler of fast release cycles, fast feedback and high performance. Most of what we know about how to do CD is oriented towards server software, where we control the environment. For desktop applications distributed to uncontrolled environments, things can get a little tricky, and application size can be a problem.
Enabling CD for our desktop applications has changed the way we develop software in many aspects. In this presentation, we’ll talk about how we implemented CD to distribute our applications in an incremental manner and talk about many of the discoveries we made along the way.
Creating applications for Windows Phone 8.1 and Windows 8.1 with the App Studio from... - Comunidade NetPonto
Have you ever had a brilliant idea for an app? Do you want to put it into practice? Then this session is for you!
Microsoft's App Studio is a service that makes it easier and faster to develop applications for Windows Phone and Windows 8.1. In this session, Sara will present App Studio, where she will create an app and add several features, for example: a news feed from a blog, a YouTube channel, a Facebook page feed, among other functionality.
And since App Studio is extensible, and because code could not be left out, Sara will show an example of how to extend the code by adding a Twitter feed.
MVVM is the recommended development pattern for Windows Phone applications, and to help implement it there are several toolkits that make the development process easier.
In this session Sara will show how to use the MVVM Light toolkit and the Cimbalino Windows Phone Toolkit, two very powerful toolkits for implementing the MVVM pattern. She will build several examples during the session, such as how to:
- get the current location;
- launch the camera app to take photos;
- launch the phone app to make calls;
- get the phone's unique identifier;
- write text and images to isolated storage.
Deep dive into Windows Azure Mobile Services - Ricardo Costa - Comunidade NetPonto
The presentation aims to cover all the services provided by the Azure Mobile platform: from data storage to server-side code, through push notifications and custom APIs.
Source control, the scheduler, logging and scaling will also be covered.
The power of templating... with NVelocity - Nuno Cancelo - Comunidade NetPonto
Design patterns have existed since the beginning of time, even before they had names, and with the arrival of the Internet for the everyday user it became evident to programmers how important it is to use patterns and separate the responsibilities of an application's modules.
The best-known pattern is MVC: a set of good practices that separates the application into three components, the Model, the Controller and the View. It is with this pattern that the potential of template engines takes off, letting us reach whatever goals our imagination and the application's purpose set, for example generating templates for:
- web pages
- emails
- code generation
In this session we will talk about NVelocity, a template engine with great potential, limited only by our imagination.
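The one-template-many-outputs idea that NVelocity implements for .NET can be illustrated with Python's standard-library `string.Template`, which happens to use the same `$placeholder` style as NVelocity's .vm templates (the email template here is invented):

```python
from string import Template

# One template, many outputs: the essence of a template engine.
email = Template("Hello $name, your order $order_id has shipped.")

first = email.substitute(name="Ana", order_id="A-1001")
second = email.substitute(name="Rui", order_id="A-1002")

assert first == "Hello Ana, your order A-1001 has shipped."
assert second == "Hello Rui, your order A-1002 has shipped."
```

NVelocity adds control flow, macros, and object access on top of this substitution model, but the separation — layout in the template, data from the Model — is the same.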
ASP.NET Performance – A pragmatic approach - Luis Paulino - Comunidade NetPonto
In this session we address the performance of information systems built on the ASP.NET platform with SQL Server as the DBMS. We will explain how performance problems arise in systems that have been in production for a few years, and what approach to take when users are unhappy.
We will also cover some market success stories in high-availability systems and how the market has evolved. Broadly, we intend to demonstrate ASP.NET performance analysis and tuning techniques and their evolution across versions, as well as some requirements techniques for obtaining and structuring information.
Finally, the goal is to share procedures, techniques and tools that can serve as a reference should performance problems arise in our future systems, including: Do's & Don'ts, Systematic Tuning, ASP.NET Trace, VS Profiling Tools and SQL Profiler, among others.
With ASP.NET SignalR we gain the power of real-time communication through push mechanisms. SignalR uses a set of technologies and techniques to let the server send information to one or more clients. These clients can be as different as an HTML + JavaScript client, a WPF application, or even an app running on iOS.
We will explore these capabilities through a set of practical examples showing:
- which techniques and technologies underpin SignalR;
- how simple it is to create a client capable of real-time communication;
- which platforms already support SignalR.
We will also discuss the areas where this technology can be applied.
In this session we will analyze the characteristics of this service and give a brief introduction to the architecture that supports it. We will look at the considerations to keep in mind when creating and using this type of storage, analyzing the impact that those decisions have on performance and scalability goals.
Some usage examples in different scenarios will also be shown, including optimizations that can improve performance.
Comunidade NetPonto, the .NET community in Portugal!
http://netponto.org
The goal of this session is to show the new features of HTML5, as well as its integration with existing technologies.
We will cover the differences between HTML 4 and HTML 5: the new features, the new controls, and integration with existing technologies (CSS and JavaScript). We will also discuss how to work offline, how to connect to the server to send or receive information, and how to use Canvas and SVG to draw in HTML.
Nowadays, working on "internet time", with the cloud paradigm and an economy that forces us to do more with less, "time to market" becomes a differentiating factor between the success and failure of a software project.
This session covers methods and tools that help automate build and deployment processes, which can become painful and even block the final stretch of a software project, so that we can focus on the activities that add value to our product.
In particular, practical examples will be shown of Microsoft technologies such as MSBuild, Web Deploy, web.config transformations and web.config parametrizations, as well as using the Jenkins build server to implement build and deployment automation.
Presentation by Nuno Caneco on Using Mock Objects in Unit Tests at the 29th in-person meeting of Comunidade NetPonto in Lisbon (http://netponto.org).
Presentation by João Pedro Martins on Project Team Dynamics and Motivation at the 28th in-person meeting of Comunidade NetPonto in Lisbon (http://netponto.org).
Presentation by Marco Silva on using KnockoutJS with ASP.NET MVC 3 at the 28th in-person meeting of Comunidade NetPonto in Lisbon (http://netponto.org).
How to be a programmer by day and still sleep well at night - Comunidade NetPonto
Presentation by Bruno Lopes on various topics such as instrumentation, profiling, logging and good programming and software-development practices, including lessons learned from developing, maintaining and supporting several applications and products in production, at the 2nd in-person meeting of Comunidade NetPonto (http://netponto.org) in Porto.
Windows 8: Developing Metro Style Apps - C. Augusto Proiete - Comunidade NetPonto
Presentation by C. Augusto Proiete on how to develop Metro style applications for Windows 8 and take advantage of the new APIs introduced with the Windows Runtime (WinRT), at the 2nd in-person meeting of Comunidade NetPonto (http://netponto.org) in Porto.
Video of this presentation:
http://www.youtube.com/watch?v=8-njK3WjZtY
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
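Under the hood, the JMeter-to-InfluxDB integration ships each sample in InfluxDB's line protocol, a plain-text `measurement,tags fields timestamp` format. A hedged sketch of rendering such a point (the tag and field names are illustrative, not JMeter's exact schema):

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Render one InfluxDB line-protocol point:
    measurement,tag1=v1,... field1=v1,... timestamp"""
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))

    def render(v):
        if isinstance(v, bool):
            return "true" if v else "false"
        if isinstance(v, int):
            return f"{v}i"   # integer fields carry an 'i' suffix
        if isinstance(v, str):
            return f'"{v}"'  # string fields are double-quoted
        return repr(v)       # floats are written as-is

    field_part = ",".join(f"{k}={render(v)}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_part} {field_part} {timestamp_ns}"

line = to_line_protocol(
    "jmeter",
    {"application": "shop", "transaction": "login"},
    {"count": 42, "avg": 187.5},
    1700000000000000000,
)
assert line == ("jmeter,application=shop,transaction=login "
                "avg=187.5,count=42i 1700000000000000000")
```

Grafana then queries these points by measurement and tags, which is why tagging each sample with the application and transaction name matters.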
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell us all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details of how to best design a sturdy architecture within ODC.
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... - Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation, however, takes much work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
- UI automation introduction
- UI automation sample
- Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Connector Corner: Automate dynamic content and events by pushing a button - DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
- Create a campaign using Mailchimp with merge tags/fields
- Send an interactive Slack channel message (using buttons)
- Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Kubernetes & AI - Beauty and the Beast!?! @ KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I wondered, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure and operations point of view. Is it possible to apply our beloved cloud native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and offer you a short journey through existing deployment models and use cases for AI software. Using practical examples, we will discuss what cloud/on-premises strategy we may need to apply it to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of the infrastructure requirements and technologies that could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
4. Business Drivers for IT Projects. Increase market share and revenue: investing in product development and customer-facing interaction channels. Increase efficiency and lower TCO: investing in technologies and processes to drive efficiency and lower cost through optimization.
5. Control vs. Economy of Scale (chart: control on one axis, economy of scale on the other; high control trades off against high economy of scale)
6. Build vs. Buy. This is not new… (plotted on the same control vs. economy-of-scale chart)
7. On-premises vs. in the cloud. This is new… (plotted on the same control vs. economy-of-scale chart)
8. Leveraging Economy of Scale: ratio between $ buckets for medium and very large datacenters. Source: Above the Clouds: A Berkeley View of Cloud Computing
9.
10. Location Matters: short-distance access to hydroelectric power; long-distance transmission over the grid; energy generation based on expensive resources.
11. Location Matters, price per kWh: 3.6¢ Idaho (short-distance access to hydroelectric power); 10.0¢ California (long-distance transmission over the grid); 18.8¢ Hawaii (energy generation based on expensive resources).
12. Datacenter Options. Application runs on-premises: buy my own hardware and manage my own data center. Application runs at a hoster: pay someone to host my application using hardware that I specify. Application runs on a cloud platform: pay someone for a pool of computing resources that can be applied to a set of applications.
19. IaaS: the user gets access to a pool of resources through a VM abstraction; the user needs to patch/maintain the system; scales only if stateless or well partitioned. E.g. Amazon EC2.
20. PaaS: the user gets access to a pool of resources through a programming model; the system is managed by the cloud provider; the platform (programming model) can provide scalability. E.g. Windows Azure, Google App Engine.
21. SaaS: the user gets access to applications; the system is completely managed by the cloud provider; the cloud provider may offer customization and extensibility options. E.g. Salesforce.com, Microsoft BPOS, Google Apps.
24. Wall Street firm on Amazon EC2 (chart, week of Wednesday 4/22/2009 through Tuesday 4/28/2009): the number of EC2 instances ramped from roughly 300 CPUs on weekends to about 3,000 instances on weekdays.
25. Scale through PaaS: the platform abstracts the physical infrastructure, and the ability to scale is built into the programming model. Examples: Google’s App Engine, Microsoft’s Windows Azure, Amazon’s Elastic MapReduce, …
26. What is Windows Azure? An operating system for the cloud: hardware abstraction across multiple servers; distributed, scalable, available storage; deployment, monitoring and maintenance; automated service management, load balancers, DNS; programming environments; interoperability; designed for utility computing.
27. Why Windows Azure? The OS takes care of your service in the cloud (deployment, availability, patching, hardware configuration); you worry about writing the service.
28. What is Windows Azure? Features: automated service management, compute, storage, developer experience.
58. Provide rich tooling and templates to easily build your application (compute, storage).
59. Windows Azure API. All operations are accessed through RoleEnvironment. RoleEnvironment handles retrieving configuration information and LocalResource locations (local disk), and gives access to role information such as the InstanceEndpoints necessary for inter-role communication. It also exposes events during the instance lifecycle, allowing a chance to respond to topology or configuration changes.
60. Service Models: describes your service.
<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="CloudService1" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole">
    <ConfigurationSettings>
      <Setting name="AccountName"/>
    </ConfigurationSettings>
    <LocalStorage name="scratch" sizeInMB="50"/>
    <InputEndpoints>
      <!-- Must use port 80 for http and port 443 for https when running in the cloud -->
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
  </WebRole>
  <WorkerRole name="WorkerRole">
    <ConfigurationSettings>
      <Setting name="AccountName"/>
      <Setting name="TableStorageEndpoint"/>
    </ConfigurationSettings>
  </WorkerRole>
</ServiceDefinition>
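Since the service definition is just namespaced XML, it can be inspected with ordinary XML tooling. A minimal Python sketch (the embedded XML is the slide's own example with its whitespace restored; the helper name is ours, not part of any SDK):

```python
import xml.etree.ElementTree as ET

# The slide's service definition, embedded for the sketch.
SERVICE_DEF = """<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="CloudService1"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole">
    <ConfigurationSettings>
      <Setting name="AccountName"/>
    </ConfigurationSettings>
    <LocalStorage name="scratch" sizeInMB="50"/>
    <InputEndpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80"/>
    </InputEndpoints>
  </WebRole>
  <WorkerRole name="WorkerRole">
    <ConfigurationSettings>
      <Setting name="AccountName"/>
      <Setting name="TableStorageEndpoint"/>
    </ConfigurationSettings>
  </WorkerRole>
</ServiceDefinition>"""

NS = {"sd": "http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"}

def describe_service(xml_text):
    """Return, per role, the declared settings and input endpoints."""
    root = ET.fromstring(xml_text)
    roles = {}
    for role in root:  # children are WebRole / WorkerRole elements
        settings = [s.get("name")
                    for s in role.findall("sd:ConfigurationSettings/sd:Setting", NS)]
        endpoints = [(e.get("name"), e.get("port"))
                     for e in role.findall("sd:InputEndpoints/sd:InputEndpoint", NS)]
        roles[role.get("name")] = {"settings": settings, "endpoints": endpoints}
    return roles

info = describe_service(SERVICE_DEF)
```

This is how the fabric itself reads the model: role names, instance settings and endpoints are all declarative, so the platform can deploy and scale without inspecting your binaries.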
64. Storage: blobs, drives, tables, queues. Designed for the cloud: 3 replicas, guaranteed consistency. Accessible directly from the internet via a REST API; a .NET client library is supported; does not require compute. The storage account drives a unique URL, e.g.: https://<youraccount>.blob.core.windows.net
65. Blobs. Blobs are stored in containers: 1 or more containers per account; scoping is at the container level (…/Container/blobname); $root is the special name for the root container. There are two types of blob, Page and Block: Page is random read/write, Block has block semantics. Metadata, accessed independently, is name/value pairs (8 KB total). Container access is private or public.
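The addressing scheme above can be sketched as a small helper: one level of containers, blob names that may contain "/" to fake folders, and "$root" mapping to the account root. This only illustrates the URL shape shown on the slides; the account and container names are placeholders:

```python
BLOB_HOST = "{account}.blob.core.windows.net"

def blob_url(account, container, blob_name):
    """Build the public REST URL for a blob, per the slide's scheme."""
    host = BLOB_HOST.format(account=account)
    if container == "$root":
        # Blobs in the special $root container sit at the account root.
        return f"https://{host}/{blob_name}"
    return f"https://{host}/{container}/{blob_name}"

print(blob_url("myaccount", "photos", "2010/june/pic.jpg"))
# https://myaccount.blob.core.windows.net/photos/2010/june/pic.jpg
```

Note the "folders" in the blob name are an illusion: "photos" is the only real container level, and "2010/june/pic.jpg" is a single blob name.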
66. Queues: a simple asynchronous dispatch queue. Create and delete queues. Messages are retrieved at least once; max size 8 KB. Operations: Enqueue, Dequeue, RemoveMessage.
67. Tables: entities and properties (rows & columns). Tables are scoped by account and designed for billions+ of entities. Scale-out uses partitions (partition key & row key); operations are performed on partitions; queries are efficient; there is no limit on the number of partitions. Use ADO.NET Data Services.
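A toy model of the (PartitionKey, RowKey) scheme makes the scale-out idea concrete: each partition is an independent unit that could live on a different server, entities in one table may have different shapes, and a query that names its partition never scans the whole table. Purely illustrative, not the ADO.NET Data Services client:

```python
from collections import defaultdict

class ToyTable:
    """Toy model of a partitioned, schema-free entity table."""

    def __init__(self):
        # One dict per partition: partitions can be spread across servers.
        self._partitions = defaultdict(dict)

    def insert(self, entity):
        # Every entity must carry the two addressing keys; the rest is free-form.
        pk, rk = entity["PartitionKey"], entity["RowKey"]
        self._partitions[pk][rk] = entity

    def get(self, pk, rk):
        """Point lookup: touches exactly one partition."""
        return self._partitions[pk].get(rk)

    def query_partition(self, pk, predicate=lambda e: True):
        """Efficient query: scans a single partition, not the whole table."""
        return [e for e in self._partitions[pk].values() if predicate(e)]

t = ToyTable()
t.insert({"PartitionKey": "customers-US", "RowKey": "001", "firstname": "Ada"})
t.insert({"PartitionKey": "customers-US", "RowKey": "002",
          "firstname": "Linus", "lastname": "T."})   # different shape: fine
t.insert({"PartitionKey": "orders", "RowKey": "o-9", "orderId": 9})
```

A query that does not name its partition would have to fan out to every partition, which is why the slides insist that queries be targeted at a partition.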
69. Service Lifecycle. Create a service package: binaries + content + service metadata. Deploy via the web portal or the Service Management API. Add & remove capacity via the web portal or API. Deployed across fault domains. Upgrade with zero downtime.
70. Automated Service Management: you tell us what, we take care of how. The what: service metadata. The how: the metadata describes the service; there is no OS footprint; the service is copied to instances; instances are copied to physical hardware; the physical hardware is booted from a VHD; all patching is performed offline.
71. Service Monitoring: an SDK component providing distributed monitoring & data collection for cloud apps. It supports the standard diagnostics APIs (Trace and Debug work as normal), lets you manage multiple role instances centrally, and is scalable. Choose what to collect & when to collect it: event logs, trace/debug output, performance counters, IIS logs, crash dumps, arbitrary log files.
72. Design Considerations. Scale and availability are the design points. Storage isn’t a relational database. Stateless: keep front ends stateless and store state in storage; use queues to decouple components. Instrument your application (Trace). Once you are on, stay on: think about patching & updates.
73. The CxO Value Proposition. Time to market: no upfront CAPEX, instant access to compute resources. Total cost of ownership: CAPEX/OPEX, economy of scale, elasticity. Productivity: a higher level of abstraction for compute resources. Risk mitigation: allows for trial and error and for unpredictable utilization patterns.
74. Cloud Pricing. Pricing options: pay as you go is flexible but unpredictable; flat fees, quotas and limits allow for better control; decisions are based on business model, predictability and IT budgeting. Pricing dimensions: compute pricing (# CPUs, hours deployed vs. hours utilized, dedicated local storage & memory); bandwidth (inbound, outbound, inter-DC; can become very pricey for large storage volumes; disk shipments); storage (average used storage capacity vs. “fixed-size” storage, storage transactions (read, write), used query time).
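A back-of-the-envelope sketch of these pricing dimensions. All rates here are made-up placeholders, since real prices vary by provider and era; the structural point is that compute is billed on hours deployed, not hours actually utilized:

```python
def monthly_estimate(instances, hours_deployed, compute_rate,
                     avg_storage_gb, storage_rate,
                     transactions, txn_rate,
                     egress_gb, egress_rate):
    """Sum the three pricing dimensions from the slide: compute, storage, bandwidth."""
    compute = instances * hours_deployed * compute_rate      # hours DEPLOYED, not utilized
    storage = avg_storage_gb * storage_rate + transactions * txn_rate
    bandwidth = egress_gb * egress_rate
    return round(compute + storage + bandwidth, 2)

# Hypothetical example: 2 small instances deployed around the clock for a
# 720-hour month at a made-up $0.12/hour, plus storage and egress.
cost = monthly_estimate(instances=2, hours_deployed=720, compute_rate=0.12,
                        avg_storage_gb=50, storage_rate=0.15,
                        transactions=1_000_000, txn_rate=0.01 / 10_000,
                        egress_gb=100, egress_rate=0.15)
```

Note that an instance idling at 5% CPU costs the same as one at full load, which is why the slide distinguishes hours deployed from hours utilized.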
75. The Bandwidth Challenge. Storage vs. bandwidth: the time and cost of transferring data (cf. AWS Import/Export). Compute and storage location matters: compute where the data is stored instead of moving the data to the compute resource.
76. The Regulation Challenge. Safe Harbor: datacenter locations matter. Patriot Act: Big Brother might get access to your data. Industry regulations: e.g. the Payment Card Industry Data Security Standard (PCI DSS).
77. Where to Start? Find a business case first! Compute scenarios: capabilities with few integration requirements (input, compute, output); high scalability/performance requirements; challenging utilization requirements. Storage scenarios: archiving solutions; cross-corporate data sharing. Look for capabilities with infrequent compute.
80. Upcoming in-person meetings: 19/06/2010 (June), 10/07/2010 (July), 14/08/2010 (August), 18/09/2010 (September). Save these dates in your calendar! :)
81. Thank you! Pedro Rosa http://pt.linkedin.com/in/pedrobarraurosa http://twitter.com/pedrorosa
Editor's Notes
You can draw the comparison between the desktop/server OS and the cloud OS. The desktop OS abstracts away things like printers, display drivers, memory, etc., so you only have to worry about building your desktop application. The cloud OS does this for the cloud, but instead of printers and display drivers it abstracts across multiple servers and networking components, and provides a “cloud file system” for storage, a programming environment, etc. The last 2 points: 1. Interoperability: the storage etc. uses REST-based protocols; additionally, we support things like PHP, MySQL, Apache, etc., with the release of inter-role communication. 2. Designed for utility computing: rather than charging a per-seat license, you will be charged by consumption; the pricing is available on the windowsazure.com website.
Windows Azure is not about letting you set up and run an entire OS with your application. Instead, it is about running your service on commodity servers that are managed by Microsoft. Microsoft takes care of deploying your service, patching the OS, keeping your service running, configuring hardware and infrastructure, etc. All of this is automated. All you need to worry about is writing the service.
Here are some of the features we’ll walk through in the next few minutes.
This is the exploding cloud diagram
Windows Azure runs on Windows Server 2008 running .NET 3.5 SP1. At MIX09, we opened up support for Full Trust and FastCGI. Full Trust is starred here because while Full Trust gives you access to p/invoke into native code, that code still runs in user mode (not as administrator). For most native code that is just fine; if you wanted to call into some Win32 APIs, for instance, it might not work in all cases because we are not running your code under a system administrator account. There are 2 roles in play: a web role, which is just a web site (ASP.NET, WCF, images, CSS, etc.), and a worker role, which is similar to a Windows service in that it runs in the background and can be used to decouple processing. There is a diagram later that shows the architecture, so don’t worry about how it fits together just yet. Key points: the inbound protocols are HTTP & HTTPS, and now TCP ports; outbound is any TCP socket (but not UDP). All servers are stateless, and all public access is through load balancers. You can use inter-role communication to communicate point to point between roles (non-public, non-load-balanced).
This should give a short introduction to storage. Key points are that it’s durable (once you write something, we write it to disk), scalable (you have multiple servers with your data) and available (the same as compute: we make sure the storage service is always running, and there are 3 instances of your data at all times). Quickly work through the different types of storage. Blobs: similar to the file system; use them to store content that changes: uploads, unstructured data, images, movies, etc. Drives: an NTFS-mountable drive backed by blob storage; this enables interop and traditional file system capabilities on top of blob storage. Tables: semi-structured; provide a partitioned entity store (more on partitions etc. in the Building Azure Services talk), allowing you to have tables containing billions of rows, partitioned across multiple servers. Queues: a simple queue for decoupling compute Web and Worker Roles. All access is through a REST interface. You can actually access the storage from outside of the data center (you don’t need compute), and you can access storage via anything that can make an HTTP request. It also means table storage can be accessed via ADO.NET Data Services.
Remind them the cloud is all the hardware across the board. Point out the automated service management.
The developer SDK is a “cloud in a box”, allowing you to develop and debug locally without requiring a connection to the cloud. You can do this without Visual Studio, as there are command-line tools for executing the “cloud in a box” and publishing to the cloud. There is also a separate download for the Visual Studio 2008 tools, which provides the VS debugging support and templates. Requirements are any version of Visual Studio (including Web Developer Express) and Vista SP1, Win7 RC or later.
There is a small API for the cloud, that allows you to do some simple things, such as logging, reading from a service configuration file, and local file system access. The API is small and is easy to learn.
To allow us to deploy and operate your service in the cloud, we need to know the structure of your service. You describe your service and operating parameters through the use of a service model. This service model tells us which roles you have, any service configuration and can also describe the number of instances you need for each role within your service. Whilst this model is simple today, the model will be extended to allow you to describe a much richer operational model – e.g. allowing scale-out and scale-down based upon consumption and performance.This file is also where you would store configuration that may change once deployed. Since all files within a role are read-only, you cannot change either an app.config or web.config file once deployed, the only configuration you can change is in the service model.
Key points here are that all external connections come through a load balancer. If you are familiar with the previous model, you will notice that two new features are diagrammed here as well, namely inter-role communication (notice there is no load balancer) and TCP ports directly to Worker Roles (or Web Roles). We will still use storage to communicate asynchronously and reliably via queues in a lot of cases; however, inter-role communication fills in when you need direct synchronous communication.
In this next section, we’ll dig a little deeper into storage. Recall there are 3 types of storage. Recall the design point is the cloud: there are 3 replicas of data, and we implement guaranteed consistency. In the future there will be some transaction support, which is why we use guaranteed consistency. Access is via a storage account; you can have multiple storage accounts per project. Although the API is REST, there is a supported .NET storage client in the SDK that you can use within your project. This makes working with storage much easier.
Blobs. Blobs are stored in containers. There are 0 or more blobs per container and 0 or more containers per account (since you can have 0 containers, but then you would not have any blobs either). A typical URL in the cloud is http://accountname.blob.core.windows.net/container/blobpath. Blob paths can contain the / character, so you can give the illusion of multiple folders, but there is only 1 level of containers. Blob capacity at CTP is 50 GB. There is an 8 KB dictionary that can be associated with blobs for metadata. Blobs can be private or public: private requires a key to read and write; public requires a key to write but NO KEY to read. Use blobs where you would have used the file system in the past.
Queues are simple: messages are placed in queues; the max size is 8 KB (and it’s a string). A message can be read from the queue, at which point it is hidden. Once whatever read the message from the queue has finished processing it, it should then remove the message from the queue. If not, the message is returned to the queue after a specific, user-defined time limit. This can be used to handle code failures etc.
Tables are simply collections of entities. Entities must have a PartitionKey and RowKey, and can also contain up to 256 other properties. Entities within a table need not be the same shape, e.g.: Entity 1: PartitionKey, RowKey, firstname; Entity 2: PartitionKey, RowKey, firstname, lastname; Entity 3: PartitionKey, RowKey, orderId, orderData, zipCode. Partitions are used to spread data across multiple servers. This happens automatically, based on the partition key you provide. Table “heat” is also monitored, and data may be moved to different storage endpoints based upon usage. Queries should be targeted at a partition, since there are no indexes to speed up performance; indexes may be added at a later date. It’s important to convey that whilst you could copy tables in from a local data source (e.g. SQL), it would not perform well in the cloud; data access needs to be re-thought at this level. Those wanting a more traditional SQL-like experience should investigate SDS.
Once you have built and tested your service, you will want to deploy it. The key to deployment and operations is the service model. To deploy, first you build your service; this takes the project output + content (images, CSS, etc.) and makes a single file. It also creates an instance of your service metadata. Next you would visit the web portal and upload the 2 solution files; from there the “cloud” takes care of deploying them onto the correct number of machines and getting them to run. To increase and decrease capacity today, you would edit the configuration from the web portal. With more than 1 instance, you should be deployed across fault domains, meaning separate hardware racks. In the portal you have a production and a staging area, with different URLs. You can upload the next version of your project into staging, then flip the switch, which essentially changes the load balancers to point to the new version.
So how do we do the automated deployment and manage your service? First, remember the service metadata tells us exactly what we need to deploy, how many instances, etc. There is no OS footprint, so your service can be copied around the data center without any configuration requirements. The OS itself is on a VHD, so it is copied to the hardware; the hardware itself was also booted from a VHD, which was likewise copied around. Therefore, to roll out a new version of your software, or of the OS that hosts it, all we need to do is copy it to a new machine and spin it up. It also means we can patch and test the OS offline. No live patching!
Now your service is deployed; how do YOU monitor it? With the diagnostics and monitoring API, you can deploy your roles and remotely configure which sources your instance should monitor. This configuration can be by role or by instance. You can configure standard tracing in your application, monitor the event logs or performance counters, and collect log files such as IIS logs (or any log file), as well as crash dumps of your application. Since this information can be pushed into your storage account on demand or on a scheduled basis, it is both highly scalable and easily manageable from outside of Windows Azure.
Some key things to remember. Design points are scalability and availability: think in terms of lots of small servers rather than a single BIG server. Table storage is semi-structured: IT’S NOT A RELATIONAL DATABASE, AND IT NEVER WILL BE; THAT IS SDS. Everything is stateless (you can maintain state in table or blob storage if YOU want to). Decouple everything using queues, and write code to be repeatable without breaking anything; in other words, design for failure! Instrument and log your application yourself. Work on the idea that once you are on, you stay on. How will you patch/update your service once it is switched on?