The document summarizes the development of a scratch card game for LINE NOW by a team in one month. Key points:
- The team developed a scratch card reward game for an upcoming Moon Festival campaign in LINE NOW within one month using agile methods.
- They created an algorithm for randomly generating scratch cards that distributes prizes evenly over time, avoiding both early prize exhaustion and the user complaints that would follow.
- The architecture included LIFF and microservices with Redis to store card data, Kafka for events, and deployment to Kubernetes.
- Challenges included performance testing of the card generation and redemption APIs, scaling to handle campaign traffic peaks, and recovering when Kafka went down on the last…
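The even-distribution idea above can be sketched in a few lines: split the campaign into time buckets, give each bucket a prize quota, and decide each scratch draw against the bucket's remaining quota. A hypothetical Python sketch (the function names and bucket scheme are illustrative, not the team's actual algorithm):

```python
import random

def allocate_prizes(total_prizes, num_buckets):
    """Spread the prize quota evenly across time buckets (e.g. campaign hours)."""
    base, extra = divmod(total_prizes, num_buckets)
    # Earlier buckets absorb the remainder so every prize is assigned exactly once.
    return [base + (1 if i < extra else 0) for i in range(num_buckets)]

def is_winner(winners_left, cards_left):
    """Draw one card: win probability = remaining winners / remaining cards.

    This keeps the expected winner rate uniform inside a bucket, so prizes
    neither run out early nor pile up at the end.
    """
    return winners_left > 0 and random.random() < winners_left / cards_left

quotas = allocate_prizes(total_prizes=1000, num_buckets=24)
print(quotas[0], sum(quotas))  # 42 1000
```

Pre-assigning per-bucket quotas is what makes "running out of prizes" impossible by construction: no bucket can hand out more winners than it was allocated.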
ADF Basics and Beyond - Alfresco DevCon 2018 - Mario Romano
If you want to know everything about ADF: its architecture, technologies, and best practices, you can't skip this talk. Join us also to learn more about what we released in November as part of ADF 2.0 and what our vision is for the future.
Provision to Production with Terraform Enterprise - Amanda MacLeod
In this episode we'll show you how to enforce your AWS tagging standards with Sentinel, restrict which instance types can be run, and centralize your Terraform state management for maximum efficiency and cost savings.
A presentation on the Netflix Cloud Architecture and NetflixOSS open source. For the All Things Open 2015 conference in Raleigh 2015/10/19. #ATO2015 #NetflixOSS
Apache Airflow (incubating) NL HUG Meetup 2016-07-19 - Bolke de Bruin
Introduction to Apache Airflow (Incubating), best practices and roadmap. Airflow is a platform to programmatically author, schedule and monitor workflows.
Promise of a better future by Rahul Goma Phulore and Pooja Akshantal, Thoughtworks
With the recent, vivid trend towards multicore hardware and ever-growing application requirements, concurrency is no longer the niche area it used to be and is slowly becoming the norm. In this talk, we will discuss promises/futures, one of the concurrency models that has risen to the occasion. We will look at what they are and how they're implemented and used in Java and JavaScript. We will see how Scala, with its functional paradigm and greater abstraction capabilities, avoids the "callback hell" typically associated with the model, allows writing concurrent code in "direct style", and thereby greatly reduces the cognitive burden, allowing you to focus better on application logic.
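The "direct style" contrasted with callback hell above can be illustrated with Python's standard `concurrent.futures` (a sketch, not taken from the talk; the fetch functions are made-up stand-ins for blocking I/O):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_user(user_id):
    # Stand-in for a blocking call such as an HTTP request.
    return {"id": user_id, "name": f"user-{user_id}"}

def fetch_orders(user):
    # A second dependent "remote" call, keyed on the first call's result.
    return [f"order-{user['id']}-1", f"order-{user['id']}-2"]

with ThreadPoolExecutor() as pool:
    # A future is a placeholder for a value that will exist later.
    user_future = pool.submit(fetch_user, 42)
    # Direct style: compose on the result without nesting callbacks.
    orders = fetch_orders(user_future.result())

print(orders)  # ['order-42-1', 'order-42-2']
```

Scala's `for`-comprehensions over `Future` take this further by composing the dependent steps without blocking at all, which is the reduction in cognitive burden the speakers describe.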
The data science team at Zymergen is applying machine learning techniques to identify genetic targets, work that is supported by extensive analytical automation that systematically identifies outliers, removes process-related bias, and quantifies performance improvements. We’re using Apache Airflow to construct robust data pipelines that allow us to produce clean, reliable inputs to our predictive models. In this talk, I’ll discuss the unique data processing challenges we face in working with high-throughput, biological data and provide an overview of how we’re using Apache Airflow to meet those challenges.
Building a Machine Learning App with AWS Lambda - Sri Ambati
Ludi Rehaks' meetup on 03.17.16
- Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai
- To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
AWS Lambda and Serverless framework: lessons learned while building a serverl... - Luciano Mammino
Planet9energy.com is a new electricity company building a sophisticated analytics and energy trading platform for the UK market. Since the earliest days of the company we took the unconventional decision to go serverless, and we are now building the product on top of AWS Lambda and the Serverless framework using Node.js. In this talk we will discuss why we took this radical decision, the pros and cons of this approach, and the main issues we faced as a tech team in our design and development experience. We will discuss how familiar things like testing and deployment need to be rethought to work in a serverless fashion, but also the benefits of (almost) infinite auto-scalability and the peace of mind of not having to manage hundreds of servers. Finally we will underline how Node.js fits naturally in this scenario and how it makes developing serverless applications extremely convenient.
Thanks to Padraig O'Brien and Luciano Mammino for speaking this month.
Speakers Bio:
Padraig O'Brien
Podge @Podgeypoos79 has been a software engineer for over 15 years, most of which were spent developing in .NET and SQL Server, designing and building large-scale, data-intensive applications. Lately he has shifted towards open source technologies and is spending most of his time learning Node.js, Scala, and cool data tech like Spark and Cassandra. He is also working on a "super-secret" project called UnicornDB, so don't tell anybody!
In his spare time he helps out with organising some meetups like NodeSchool Dublin, NodeSchool Dun Laoghaire and teaching Kanban via Agile Lean Ireland.
Luciano Mammino
Luciano @loige is a Software Engineer born in 1987, the same year that Nintendo released "Super Mario Bros" in Europe, which, "by chance", is his favourite game! His primary passion is code, and he is extremely fascinated by the web, smart apps, and everything creative like music, art, and design. He started coding at the age of 12 using his father's old i386 provided only with DOS and the qBasic interpreter. He is a senior software developer at Planet9Energy in Dublin and he loves JavaScript (React/Node.js). He is also the co-author of "Node.js Design Patterns", 2nd edition (Packt, http://amzn.to/1ZF279B).
Hosted by Intercom, sponsored by Nearform and organised by Node.js Dublin (https://www.meetup.com/Dublin-Node-js-Meetup/events/236870576/)
StreamSQL Feature Store (Apache Pulsar Summit) - Simba Khadder
Input features are the building blocks for machine learning models. You cannot have a great model without great features. By building on top of Apache Pulsar's infinite retention of events, we built infrastructure to serve features in production and to generate training datasets. It allowed our machine learning teams to change, test, and deploy personalization features at an extraordinary rate to 10s of millions of end-users.
This talk will discuss:
- What event-sourcing is and why it's so powerful for machine learning infrastructure.
- How we built the StreamSQL feature store on top of Pulsar, Flink, and Cassandra.
- How a feature store accelerates ML development.
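Event sourcing for features, as outlined in the bullets above, boils down to folding an append-only log into feature values. A toy Python sketch (the event shapes and feature names are hypothetical, not StreamSQL's actual implementation):

```python
from collections import defaultdict

# An append-only event log, as a message bus like Pulsar would retain it.
events = [
    {"user": "u1", "type": "click", "item": "a"},
    {"user": "u1", "type": "click", "item": "b"},
    {"user": "u2", "type": "purchase", "item": "a"},
    {"user": "u1", "type": "purchase", "item": "b"},
]

def materialize_features(event_log):
    """Fold the event stream into per-user feature values.

    The same pure fold can serve live features and, replayed over a
    historical prefix of the log, regenerate training data.
    """
    features = defaultdict(lambda: {"clicks": 0, "purchases": 0})
    for e in event_log:
        if e["type"] == "click":
            features[e["user"]]["clicks"] += 1
        elif e["type"] == "purchase":
            features[e["user"]]["purchases"] += 1
    return dict(features)

print(materialize_features(events)["u1"])  # {'clicks': 2, 'purchases': 1}
```

Because the log is never truncated, this is why infinite retention matters: training datasets and serving features stay consistent, since both are derived from the same events by the same fold.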
Architecting for the Cloud using NetflixOSS - Codemash Workshop - Sudhir Tonse
Cloud development is inherently different than data center development. Understanding those differences, and architecting for them is critical to successful cloud solutions. In this workshop, we will both describe Netflix OSS platform components and show you how you can piece them together to build your own fault-tolerant REST services. These include: Hystrix, Ribbon, Eureka, and Archaius. In this hands-on lab, you will both learn the benefits of each of these services and use them in a sample application (in a test account). If you want to get things running in your own account, you may want to attend the afternoon session (Setting up your environment for the AWS cloud).
Building serverless app using AWS Lambda (b4usolution) - Hoa Le
Scaling applications is a big problem whether your applications are deployed on-premises or in the cloud: you have to provision and manage servers, deciding how much CPU, storage, and database capacity your applications need.
The Apache Way - Building Open Source Community in China - Luke Han
My presentation at ApacheCon 2016 NA, about our practices for building an open source community (Apache Kylin) in China: the challenges, the cultural differences, the language, and so on.
It also gives an overview of open source in China and the changes happening there now.
It's a good reference for people interested in extending their community into China, engaging more Chinese and other Asian developers to grow their open source community and adoption.
(This presentation was presented at the Serverless Summit.)
A serverless platform can be a very good fit for event-driven applications. In this session, we will explore what event-driven applications are, their architecture, and how a serverless platform can be leveraged to create them. We will also explore best practices when developing such applications, touching upon areas like security, code portability, modularizing code and relevant patterns, and data proximity issues. This will be followed by a demo of an event-driven application deployed on a serverless platform.
Running Airflow Workflows as ETL Processes on Hadoop - clairvoyantllc
While working with Hadoop, you'll eventually encounter the need to schedule and run workflows to perform various operations like ingesting data or performing ETL. There are a number of tools available to assist you with this type of requirement, and one such tool that we at Clairvoyant have been looking to use is Apache Airflow. Apache Airflow is an Apache Incubator project that allows you to programmatically create workflows through a Python script. This provides a flexible and effective way to design your workflows with little code and setup. In this talk, we will discuss Apache Airflow and how we at Clairvoyant have utilized it for ETL pipelines on Hadoop.
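The core idea, a workflow defined as code that a scheduler resolves into an execution order, can be shown with a toy dependency resolver. This is deliberately not the Airflow API (which uses `DAG` objects and operator classes); it only illustrates why code-defined DAGs are so flexible:

```python
def topo_order(dag):
    """Return tasks in dependency order (Kahn's algorithm).

    `dag` maps each task name to the set of tasks it depends on.
    """
    order, resolved = [], set()
    pending = dict(dag)
    while pending:
        # A task is runnable once all of its dependencies have resolved.
        ready = [t for t, deps in pending.items() if deps <= resolved]
        if not ready:
            raise ValueError("cycle detected in workflow")
        for t in sorted(ready):  # sorted for a deterministic order
            order.append(t)
            resolved.add(t)
            del pending[t]
    return order

# A classic ETL pipeline expressed as plain Python data.
etl = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}
print(topo_order(etl))  # ['extract', 'transform', 'load', 'report']
```

Because the structure is ordinary Python, tasks can be generated in loops, parameterized, or assembled from config, which is the "little code and setup" advantage the abstract refers to.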
Akka and AngularJS – Reactive Applications in Practice - Roland Kuhn
Imagine how you are setting out to implement that awesome idea for a new application. In the back-end you enjoy the horizontal and vertical scalability offered by the Actor model, and its great support for building resilient systems through distribution and supervision hierarchies. In the front-end you love the declarative way of writing rich and interactive web apps that AngularJS gives you. In this presentation we bring these two together, demonstrating how little effort is needed to obtain a responsive user experience with fully consistent and persistent data storage on the server side.
See also http://summercamp.trivento.nl/
AWS re:Invent 2016: Content and Data Platforms at Vevo: Rebuilding and Scalin... - AwsReinventSlides
Vevo has undergone a complete strategic and technical reboot, driven not only by product, but also by engineering. Since November 2015, Vevo has been replacing monolithic, legacy content services with a modern, modular, microservices architecture, all while developing new features and functionality. In parallel, Vevo has built its data platform from scratch to power internal analytics as well as a unique music video consumption experience through a new personalized feed of recommendations — all in less than one year.
This has been a monumental effort that was made possible in this short time span largely because of AWS technologies. The content team has been heavily using serverless architectures and AWS Lambda in the form of microservices, taking a similar approach to functional programming, which has helped us speed up the development process and time to market. The data team has been building the data platform by heavily leveraging Amazon Kinesis for data exchange across services, Amazon Aurora for consumer-facing services, Apache Spark on Amazon EMR for ETL + Machine Learning, as well as Amazon Redshift as the core analytics data store.
In this session, Miguel and Alan walk you through Vevo's journey, describing best practices and learnings that the Vevo team has picked up along the way.
This talk evaluates some easy ways to extract useful trending and capacity-planning information out of your existing monitoring investment. Using Nagios performance data, we examine simple behaviors with PNP4Nagios and graduate on to more insightful analytics with Graphite. With metrics in hand, we look at the questions that IT /should/ be asking, such as:
* What sort of data should I trend?
* Why do I need to trend it?
* How do Operational or Engineering trends relate to Business or Transactional monitoring?
* How does this data impact our customer relationship and/or their bottom-line?
Finally, we look at creative ways to get profiling data out of your production systems with a minimum amount of effort from your development team.
MySQL performance monitoring using Statsd and Graphite (PLUK2013) - spil-engineering
MySQL performance monitoring using Statsd and Graphite (PLUK2013)
Note: this is a placeholder for the presentation next Tuesday at the Percona Live London
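For context, the StatsD side of such a setup is tiny: metrics are plain-text `name:value|type` datagrams sent over UDP (8125 is the conventional StatsD port). A minimal sketch with hypothetical metric names:

```python
import socket

def statsd_line(metric, value, metric_type="c", sample_rate=None):
    """Format a metric in the StatsD plaintext protocol: <name>:<value>|<type>."""
    line = f"{metric}:{value}|{metric_type}"
    if sample_rate is not None:
        # Sampled metrics carry their rate so the server can scale them back up.
        line += f"|@{sample_rate}"
    return line

def send(metric_line, host="127.0.0.1", port=8125):
    # StatsD listens on UDP; fire-and-forget keeps instrumentation cheap.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(metric_line.encode(), (host, port))
    sock.close()

line = statsd_line("mysql.queries.select", 1)
print(line)  # mysql.queries.select:1|c
# send(line)  # uncomment with a StatsD daemon running locally
```

The UDP fire-and-forget design is what makes it safe to instrument a busy MySQL host: a down or slow StatsD daemon never blocks the application.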
Memorial Sloan Kettering: Adventures in Drupal 8 - Phase2
Memorial Sloan Kettering is preparing to launch two websites in Drupal 8. As one of the first organizations to migrate its Drupal 6 content management system onto an enterprise Drupal 8 platform, Memorial Sloan Kettering has learned first hand the major challenges and advantages of building in Drupal 8.
In this session, project members from MSK, Phase2, and Digitas will explore the decision to take the leap to Drupal 8 and the reality of building in D8 while it is still a beta. Get details on the brute force migration process, front-end integrations and wiring up with twig in practice, and community contributions to accelerate Drupal 8 in the process of a flagship redesign for one of the leaders in the healthcare space.
We’ll elaborate on the challenges we faced and strategies we used to build on Drupal 8 and how you can learn from them!
Finally, we’ll answer some of your most burning questions:
How did you accomplish moving an existing Drupal 6 site with 25,000 plus pages of content to Drupal 8 while redesigning at the same time?
Should other organizations consider building in Drupal 8?
What tools and best practices were used by developers/sys admins?
What contrib modules are being used?
How difficult was it for the team to learn Drupal 8?
What is being used for layout and webforms?
What external libraries and APIs are being used?
An overview of Apple's game development technologies, followed by tips and techniques for using UIKit for game development. The latter third of the talk is an overview of games I've worked on in UIKit.
Sprint tools - Using a Mallet when you need a Mjölnir - Chris Urban
You’re tracking your Drupal development project with one or more spreadsheets. You know there’s a better way, and you’ve heard of other tools, but just have not had time to do some digging. This session serves to help you get started: you will get a quick overview of several popular tools and alternatives that are available to help you better manage your next Drupal project. Depending on the size and complexity of your project, you’ll walk away with a shorter list of tools to try, or maybe new options to explore as alternatives.
We’ll highlight the key benefits and drawbacks of using:
Asana
IceScrum
JIRA
Libreboard
Restyaboard
Taiga.io
Trello
Are there others you’ve tried and either hated or loved? Bring that experience to our session and we can all share!
Igniting the Spark: Building Online Services for Borderlands 2 - Jimmy Sieben
Gearbox built an online services platform named Spark for Borderlands 2. As this was an entirely new effort for Gearbox, we learned that building a service is quite different from building a game. Along the way, we shipped two beta releases in the original Borderlands to help us succeed. The genesis of Spark, key milestones and challenges in its creation, and post-mortem from launch are discussed. This talk will help others understand why we created Spark and also what it takes to launch an online service.
Venkatesh Ramanathan, Data Scientist, PayPal at MLconf ATL 2017 - MLconf
Large Scale Graph Processing & Machine Learning Algorithms for Payment Fraud Prevention:
PayPal is at the forefront of applying large-scale graph processing and machine learning algorithms to keep fraudsters at bay. In this talk, I'll present how advanced graph processing and machine learning algorithms such as Deep Learning and Gradient Boosting are applied at PayPal for fraud prevention. I'll elaborate on specific challenges in applying large-scale graph processing and machine learning techniques to payment fraud prevention. I'll explain how we employ sophisticated machine learning tools, both open source and developed in-house.
I will also present results from experiments conducted on a very large graph data set containing millions of edges and vertices.
Machine learning applications are typically stitched together from hopes and dreams, shell scripts, cron jobs, home-grown schedulers, snippets of configuration clipped from multiple blog posts, thousands of hard-coded business rules, a.k.a. "our SQL corpus," and a few lines of training and testing code. Organizing all the moving parts into something maintainable and supportive of ongoing development is a challenge most teams have on their TODO list, roadmap, or tech debt pile. Getting ahead of the day-to-day demands and settling into a sane architecture often seems like an unattainable goal. The past several years have seen an explosion of tool-building in the data engineering and analytics area, including in Apache projects spanning the areas of search and information retrieval, job orchestration, file and stream formats, and machine learning libraries. In this talk we will cover our product and development teams' choices of architecture and tools, from data ingestion and storage, through transformations and processing, to presentation of results and publishing to web services, reports, and applications.
The new stack for SharePoint Framework
Intro to Software lifecycle + devops
Intro to VSTS/Azure
The build system + deploy
Unit tests with SPFx
Intro to tech debt management
Conclusion
Bulat Lutfullin (Provectus): ML for content generation. How to generate a lev... - Provectus
1. Pure random is still used in gamedev, while ML is able to generate tons of content
2. What is a genetic algorithm? How can it be applied for the level generation?
3. Are there other machine learning methods for content generation?
4. Do we really need ML in gamedev?
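To make question 2 concrete, here is a minimal genetic algorithm that "evolves" a level towards a target difficulty. The encoding (a list of obstacle heights) and the fitness function are invented for illustration, not taken from the talk:

```python
import random

random.seed(7)  # deterministic runs for the example

TARGET_DIFFICULTY = 10  # desired sum of obstacle heights in a level

def fitness(level):
    # The closer the level's total difficulty is to the target, the fitter.
    return -abs(sum(level) - TARGET_DIFFICULTY)

def mutate(level):
    # Randomly re-roll one obstacle height (0..3).
    child = list(level)
    child[random.randrange(len(child))] = random.randint(0, 3)
    return child

def crossover(a, b):
    # Single-point crossover: splice a prefix of one parent onto the other.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(generations=50, pop_size=20, length=8):
    pop = [[random.randint(0, 3) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # keep the fitter half (elitism)
        pop = parents + [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=fitness)

best = evolve()
print(sum(best))  # at or near TARGET_DIFFICULTY
```

The same select/crossover/mutate loop scales to real level generation by swapping in a richer genome and a fitness function learned from player data, which is where the ML angle of the talk comes in.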
Similar to LINE NOW Scratch Card - From Nothing to Production in one month
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... - Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes hard work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality - Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
3. About me
• Julian Shen
• Backend dev @ LINE Taiwan
• Dev lead of LINE NOW Project
4. Team
• Julian, Hsueh-Min, Yu-mei, Alex, Denny, Shiang-Yu, Deann, Johnny, Bryan, Kay, Ting-yu
• 4 back-end, 3 front-end, 4 QA
• Love to use new stuff
• Java/Node.js/Scala/Kotlin/Go
• Spring/Finagle
• Docker/K8s/Rancher
• MongoDB/Redis/MySQL/Kafka
• Vue.js
• SOA/microservices
• Willing to change and take on challenges
• Running as a scrum team
• What's LINE NOW?
• Beacon/Location & Events
• We’re hiring
5. It's a story about moonlight
• We planned to develop a new reward type, "Scratch Card", for LINE NOW
• We wanted this to be "different"
• There was a Moon Festival campaign that also wanted to use Scratch Card as a reward
• Delayed due to design
• Needed to complete it in one month
7. What exactly is a LINE NOW Scratch Card?
• A game that dispatches rewards (scratch & get)
• LINE Points
• Coupons
• Stickers
• A tool for campaigns
• Built with LIFF (https://developers.line.me/en/docs/liff/overview/)
8. What are we trying to solve?
• Problems with the traditional "lucky draw" game design
• "Probability" means anything is possible
• The number of prizes is fixed, but the probability is not
• Prizes can be consumed too quickly or too slowly
• Need monitoring and the ability to add prizes at any time
• User complaints
• A user who should get a prize did not receive it
• A user who shouldn't get a prize claims that he should
9. Traditional lucky draw games
• The end user picks one prize based on probability:
• Prize 1 (0.1%), Prize 2 (5%), Prize 3 (10%), Prize 4 (15%), Prize 6 (20%), Prize 7 (30%), Other (no prize)
• In fact, the prize counts are fixed:
• Prize 1: 1, Prize 2: 50, Prize 3: 100, Prize 4: 200, Prize 6: 400, Prize 7: 1000
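The conflict between advertised probabilities and fixed prize counts can be seen in a minimal simulation. This is an illustrative sketch using the numbers from the slide, not the actual campaign code: draw by probability long enough and popular prizes run out while users keep "winning" them.

```python
import random

# Prize inventory and advertised probabilities from the slide (illustrative).
inventory = {"Prize 1": 1, "Prize 2": 50, "Prize 3": 100,
             "Prize 4": 200, "Prize 6": 400, "Prize 7": 1000}
probs = {"Prize 1": 0.001, "Prize 2": 0.05, "Prize 3": 0.10,
         "Prize 4": 0.15, "Prize 6": 0.20, "Prize 7": 0.30}

def lucky_draw(rng):
    """Pick a prize by advertised probability; the remainder is 'no prize'."""
    r = rng.random()
    cum = 0.0
    for prize, p in probs.items():
        cum += p
        if r < cum:
            return prize
    return None  # no prize

rng = random.Random(42)
stock = dict(inventory)
out_of_stock_hits = 0
for _ in range(5000):
    prize = lucky_draw(rng)
    if prize is None:
        continue
    if stock[prize] > 0:
        stock[prize] -= 1
    else:
        out_of_stock_hits += 1  # user "won" a prize that no longer exists

print("remaining stock:", stock)
print("wins after stock ran out:", out_of_stock_hits)
```

With 5,000 draws, Prize 3 alone is expected to be "won" about 500 times against a stock of 100, so either the operator tops up prizes constantly or users complain, which is exactly the problem the card-generation approach avoids.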
10. Card Generation Algorithm (Bryan's Algorithm)
• A card generator publishes cards into a card pool (Redis); the draw-card API draws from the pool
• Inspired by observation of real-world scratch cards
• The card generator also generates the rule for drawing (to make sure prize dispatching is even)
• Tile permutation is rendered at generation time
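The slides don't spell out the algorithm, but the core idea of pre-generating a finite deck with exact prize counts, spread evenly over time, can be sketched as follows. This is a simplified illustration under assumed names (`generate_cards`, `draw_card`); a `deque` stands in for the Redis card pool, and "even dispatching" is approximated by slicing winners round-robin across time batches.

```python
import random
from collections import deque

def generate_cards(prize_counts, total_cards, batches=10, seed=0):
    """Pre-generate a card deck with exact prize counts, spread evenly
    across `batches` time slices so prizes are not exhausted early.
    (In production the deck would be pushed into a Redis list; here a
    plain deque stands in for the pool.)"""
    rng = random.Random(seed)
    winners = [prize for prize, n in prize_counts.items() for _ in range(n)]
    assert len(winners) <= total_cards and total_cards % batches == 0
    per_batch = total_cards // batches
    deck = []
    for b in range(batches):
        batch = winners[b::batches]                       # spread winners round-robin
        batch += ["no prize"] * (per_batch - len(batch))  # fill the slice with blanks
        rng.shuffle(batch)                                # random order inside a slice
        deck.extend(batch)
    return deque(deck)

def draw_card(pool):
    """The draw API just pops a pre-generated card: O(1), no probability math."""
    return pool.popleft() if pool else None

pool = generate_cards({"Prize 2": 50, "Prize 3": 100, "Prize 7": 1000}, 10000)
first_half = [draw_card(pool) for _ in range(5000)]
print("winners in first half:", sum(c != "no prize" for c in first_half))
```

Because the deck is fixed up front, prizes can never be over-dispatched, and because winners are distributed across slices, exactly half of the 1,150 winning cards appear in the first half of the campaign.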
13. Challenges in front-end development
• Scratch traces
• Screenshots are logged several times while the user plays
• To reduce traffic:
• The last result is saved in local storage
• Only the "diff" is sent
• Performance
• Calculating the diff might block the main thread
• User experience
• How to make the user feel like scratching a real card
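The "send only the diff" idea can be illustrated with a small sketch. The real front end is JavaScript; this Python version just shows the shape of the technique, with a hypothetical mask representation (tile index mapped to scratched fraction).

```python
def mask_diff(prev, curr):
    """Return only the tiles whose scratched state changed since the last
    snapshot, instead of uploading the full mask each time.
    Masks are dicts of tile index -> scratched fraction (0.0 to 1.0)."""
    return {i: v for i, v in curr.items() if prev.get(i) != v}

def apply_diff(prev, diff):
    """Server side: merge the diff back into the stored mask."""
    merged = dict(prev)
    merged.update(diff)
    return merged

last_saved = {0: 0.2, 1: 0.0, 2: 0.9}          # snapshot kept in local storage
current = {0: 0.6, 1: 0.0, 2: 0.9, 3: 0.1}     # state after more scratching

diff = mask_diff(last_saved, current)
print(diff)  # only tiles 0 and 3 changed
```

Only the two changed tiles cross the network instead of the whole mask; the trade-off, as the slide notes, is that computing the diff costs CPU time on the client.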
14. Challenges in back-end development
• Identify possible peaks (Moon Festival campaign)
• Marketing events and press exposure might generate traffic bursts
• Banner promotions might generate traffic bursts
• There is a special promotion period (buy 1 sticker, get 2 scratch cards)
• Performance concerns
• Use Redis as the main data source during the campaign
• Data is written asynchronously
• Traffic is not always easy to estimate precisely
• The API should be able to scale horizontally
• Redis should be able to store millions of cards
• Monitoring is necessary (but we didn't have time for it)
• Failure prevention
• Redis could be a SPOF
• Data should be reconstructable from somewhere else (Kafka)
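The "Redis on the hot path, asynchronous durable writes" pattern described above can be sketched with in-memory stand-ins. This is an assumption-laden illustration, not the team's code: a dict stands in for Redis, a queue stands in for a Kafka topic, and a worker thread stands in for the consumer that persists events to the slower store.

```python
import queue
import threading

redis_store = {}            # hot path stand-in: synchronous, fast
event_log = queue.Queue()   # stands in for a Kafka topic
database = {}               # stands in for the durable store (e.g. MySQL)

def scratch_card(user_id, card_id, prize):
    """API hot path: write to the fast store synchronously,
    emit an event for durable persistence asynchronously."""
    redis_store[(user_id, card_id)] = prize
    event_log.put({"type": "scratch", "user": user_id,
                   "card": card_id, "prize": prize})

def persist_worker():
    """Consumer: drains events and writes them to the durable store."""
    while True:
        ev = event_log.get()
        if ev is None:      # demo-only shutdown signal
            break
        database[(ev["user"], ev["card"])] = ev["prize"]

t = threading.Thread(target=persist_worker)
t.start()
scratch_card("u1", "c1", "Prize 3")
scratch_card("u2", "c2", "no prize")
event_log.put(None)
t.join()
print(database)
```

The request returns as soon as the fast store is updated; durability lags slightly behind, which is acceptable here because the event log can be replayed, and it is exactly this decoupling that made the later Kafka outage survivable.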
15. Testing
• Focus on what concerns us most
• Card generation
• Check the data in Redis to make sure every card produced by the generator follows the rule
• Traffic
• Load-test the Scratch Card API
• k6 (https://k6.io/)
• Storage size
• Calculate the data size for each item in Redis
• Generate millions of cards into Redis to verify
• A Kafka Connect worker went down once during testing
• Scratch events could not be found in the admin tool
• Recoverable
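The storage-size check above can be approximated before loading millions of cards into Redis. This sketch uses a hypothetical card layout and a rough per-key overhead constant; the real verification, as the slide says, was done by actually generating millions of cards.

```python
import json

def estimate_redis_memory(cards, overhead_per_key=50):
    """Rough estimate of Redis memory for a card pool.
    Assumes each card is stored as a JSON string; `overhead_per_key`
    approximates Redis per-entry bookkeeping (a rule of thumb,
    not an exact figure)."""
    per_card = len(json.dumps(cards[0]).encode("utf-8")) + overhead_per_key
    return per_card * len(cards)

# Hypothetical card layout: id, prize, and a 9-tile permutation.
cards = [{"id": i, "prize": "no prize", "tiles": [0, 1, 2, 3, 4, 5, 6, 7, 8]}
         for i in range(1000)]
total = estimate_redis_memory(cards)
print(f"~{total / 1e6:.2f} MB for {len(cards)} cards; "
      f"~{total * 2000 / 1e9:.1f} GB extrapolated to 2 million")
```

A back-of-the-envelope number like this tells you early whether "millions of cards in Redis" fits in one instance's memory or needs sharding; the load test then confirms it empirically.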
16. Sometimes shit just happens
• Kafka went down on the last day of the Moon Festival campaign
• No scratch events were coming in
• Redeeming was not possible
• Why?
• We ran self-hosted Kafka on our own K8s (to save deployment time)
• It ran out of storage space
• How did we recover?
• We had not anticipated this situation
• Redeem events are the most important
• A cunning rabbit really needs three burrows (always have a fallback): reconstruct the redeem events from Redis
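The recovery relied on Redis still holding the authoritative card state, so the lost redeem events could be replayed from it. The sketch below is an illustration with assumed field names (`status`, `user`, `prize`), not the actual recovery script; a dict stands in for the Redis data.

```python
# Stand-in for the card state still held in Redis after Kafka went down.
redis_cards = {
    "card:1": {"user": "u1", "prize": "Prize 3", "status": "redeemed"},
    "card:2": {"user": "u2", "prize": "no prize", "status": "scratched"},
    "card:3": {"user": "u3", "prize": "Prize 7", "status": "redeemed"},
}

def rebuild_redeem_events(cards):
    """Replay redeem events from the authoritative card state, so
    downstream consumers can be caught up once Kafka is back."""
    return [{"type": "redeem", "user": c["user"], "prize": c["prize"]}
            for c in cards.values() if c["status"] == "redeemed"]

events = rebuild_redeem_events(redis_cards)
print(events)
```

This works only because the hot store, not Kafka, was the source of truth for card state, which is the "three burrows" lesson: keep at least one other place from which your most important events can be reconstructed.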