The document discusses Globo.com's recommendation platform, which provides personalized recommendations to users. It uses several big data technologies such as Hadoop, Kafka, HBase, and Elasticsearch. Recommendations are generated through both pre-computed and real-time approaches. The platform also aims to add semantics to recommendations by linking entities and relationships through techniques like named entity recognition and knowledge graphs, which is expected to improve capabilities such as finding, linking, and organizing content.
From an idea to production: building a recommender for BBC Sounds - Tatiana Al-Chueyr
This presentation was given on the 28th of September 2021 at the first MLOps London Meetup
Event website: https://www.meetup.com/mlopslondon/events/280295841/
Kafka and GraphQL: Misconceptions and Connections | Gerard Klijs, Open Web - Hosted by Confluent
GraphQL is a powerful way to bridge the gap between frontend and backend, providing a typed API with introspection that can be used for code generation or code completion.
Because of some misconceptions it can be hard to combine GraphQL with Kafka, but it doesn't need to be.
After handling misconceptions like 'Kafka is just a message bus' and 'GraphQL is only for graph databases', I will give a few demos of how the two can be combined. It can be a clear way to make your event sourcing backend available to the frontend, and if you already use ksqlDB, GraphQL can be used to quickly set up a typed API.
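As a flavor of the event-sourcing integration the talk demos, a Kafka topic can be folded into a read model that a GraphQL resolver then serves. A minimal sketch in plain Python (the event list stands in for a Kafka topic; all names here are invented for illustration):

```python
# Project Kafka-style events into a read model, then expose it through a
# resolver function of the kind a GraphQL field would call.
from dataclasses import dataclass

@dataclass
class AccountOpened:
    account_id: str
    owner: str

@dataclass
class MoneyDeposited:
    account_id: str
    amount: int

def project(events):
    """Fold the event stream into the current state (the read model)."""
    accounts = {}
    for e in events:
        if isinstance(e, AccountOpened):
            accounts[e.account_id] = {"owner": e.owner, "balance": 0}
        elif isinstance(e, MoneyDeposited):
            accounts[e.account_id]["balance"] += e.amount
    return accounts

# This list stands in for a Kafka topic of domain events.
topic = [AccountOpened("a1", "ada"), MoneyDeposited("a1", 40), MoneyDeposited("a1", 2)]

def resolve_account(account_id, events=topic):
    """What a GraphQL resolver for `account(id: ...)` would return."""
    return project(events).get(account_id)

print(resolve_account("a1"))  # {'owner': 'ada', 'balance': 42}
```

In a real system the projection would be maintained incrementally by a consumer (or by ksqlDB), and the resolver would read from that materialized view rather than replaying the topic per query.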
Deep learning algorithms have benefited greatly from the recent performance gains of GPUs. However, it has been unclear whether GPUs can speed up machine learning algorithms such as generalized linear modeling, random forests, gradient boosting machines, and clustering. H2O.ai, the leading open source AI company, is bringing the best-of-breed data science and machine learning algorithms to GPUs.
We introduce H2O4GPU, a fully featured machine learning library that is optimized for GPUs, with a robust Python API that is a drop-in replacement for scikit-learn. We'll demonstrate benchmarks for the most common algorithms relevant to enterprise AI and showcase performance gains compared to running on CPUs.
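The "drop-in" claim means that, relative to scikit-learn, only the import should change. A hedged sketch of that pattern (the exact h2o4gpu import path is an assumption, and a tiny stub fallback keeps the sketch runnable without GPUs or either library installed):

```python
# Drop-in pattern: same estimator API, different backend. Only the import
# changes; the h2o4gpu path shown is an assumption, and the stub below
# exists only so this sketch runs anywhere.
try:
    from h2o4gpu import KMeans              # GPU-accelerated backend (assumed path)
except ImportError:
    try:
        from sklearn.cluster import KMeans  # CPU fallback
    except ImportError:
        class KMeans:                       # minimal stand-in for illustration
            def __init__(self, n_clusters=8):
                self.n_clusters = n_clusters

model = KMeans(n_clusters=2)                # identical call either way
print(model.n_clusters)                     # 2
```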
Jon’s Bio:
https://umdphysics.umd.edu/people/faculty/current/item/337-jcm.html
This presentation was part of an internal training session at Jahia to introduce GraphQL and share lessons learned while working with it. It is intended for audiences with no prior knowledge of GraphQL.
Did you know that the best way to build a REST API is with an RPC framework? We’ll look at how Google and other large API producers use gRPC to build REST APIs that users love because they follow the OpenAPI Specification and that producers love because gRPC gives them power and scaling.
Implementing OpenAPI and GraphQL services with gRPC - Tim Burks
Behind every API there's code. REST and GraphQL are powerful interface abstractions but are not so great for writing code (we’re still looking for the programming language where every command is a GET, POST, PUT, or DELETE). When programmers work, they are usually making function calls, and an RPC framework like gRPC allows those functions to be written in a mixture of languages and distributed among many servers. This means that gRPC can be a great way to implement REST and GraphQL APIs at scale. We’ll share open source projects from Google that can be used to implement OpenAPI and GraphQL services with gRPC and give you hands-on experience with both.
Presented at the 2019 API Specifications Conference.
https://asc2019.sched.com/event/T6u9/workshop-implementing-openapi-and-graphql-services-with-grpc-tim-burks-google
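The gRPC-to-REST bridge the talk describes typically works through HTTP transcoding annotations on the service definition, which gateways and OpenAPI generators consume. A minimal sketch (message, service, and path names are invented for illustration):

```protobuf
syntax = "proto3";

import "google/api/annotations.proto";

message GetBookRequest {
  string name = 1;  // e.g. "shelves/1/books/2"
}

message Book {
  string name = 1;
  string title = 2;
}

service Library {
  // The google.api.http option maps this RPC onto a REST-style GET,
  // so clients see an OpenAPI-friendly HTTP API while the server
  // implements a plain gRPC method.
  rpc GetBook(GetBookRequest) returns (Book) {
    option (google.api.http) = {
      get: "/v1/{name=shelves/*/books/*}"
    };
  }
}
```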
GraphQL across the stack: How everything fits together - Sashko Stubailo
My talk from GraphQL Summit 2017!
In this talk, I present a future for GraphQL built on the idea that GraphQL enables lots of tools to work together seamlessly across the stack. I present this through the lens of three examples: caching, performance tracing, and schema stitching.
Stay tuned for the video recording from GraphQL Summit!
GraphQL - The new "Lingua Franca" for API Development - jexp
Three years ago, with the release of the GraphQL specification, Facebook took a fresh stab at the topic of "API design between remote services and applications." The key aspects of GraphQL provide a common, schema-based, domain-specific language and flexible, dynamic queries at interface boundaries.
In the talk, I'd like to compare GraphQL and REST and showcase benefits for developers and architects using a concrete example in application and API development, data source and system integration.
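To give a flavor of the GraphQL-versus-REST comparison such a talk draws, a single schema-driven query can replace several REST round trips. A hypothetical schema and query (all names invented for illustration):

```graphql
type Author {
  name: String!
  articles: [Article!]!
}

type Article {
  title: String!
  tags: [String!]!
}

type Query {
  author(id: ID!): Author
}

# One request fetches the nested data that REST would typically split
# across /authors/{id} and /authors/{id}/articles:
query {
  author(id: "42") {
    name
    articles { title tags }
  }
}
```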
Vibe Koli 2019 - A journey from the university desks to Google Developer Expert - Márton Kodok
VIBE Koli 2019 - Vibe Garázs - Gokart.
After completing his studies at Sapientia, Márton Kodok built an IT career for himself and is today a member of the Google Developer Expert (GDE) program, which places him among the country's outstanding professionals. At VIBE Koli he helps you find your own path, showing that willpower is all it takes to do something different, something more, than your peers.
Making the big data ecosystem work together with Python & Apache Arrow, Apach... - Holden Karau
Slides from PyData London exploring how the big data ecosystem (currently) works together as well as how different parts of the ecosystem work with Python. Proof-of-concept examples are provided using nltk & spacy with Spark. Then we look to the future and how we can improve.
Getting Started Contributing to Apache Spark – From PR, CR, JIRA, and Beyond - Databricks
With the community working on preparing the next versions of Apache Spark, you may be asking yourself 'how do I get involved in contributing?' With such a large volume of contributions, it can be hard to know where to begin. Holden Karau offers a developer-focused head start, walking you through finding good issues, formatting code, finding reviewers, and what to expect in the code review process. In addition to looking at how to contribute code, we explore some of the other ways you can contribute to Apache Spark, from helping test release candidates to doing the all-important code reviews, bug triage, and many more (like answering questions).
As presented at DevDuck #3 - JavaScript meetup for developers (www.devduck.pl)
-----
Get to know more about GraphQL
-----
Tackling Python: What is it and How Can it Help with Technical SEO? | TechSEO... - Ruth Everett
Python has risen in popularity over the last few years, so much so that it has become one of the most talked about and widely-adopted programming languages. But why should technical SEOs care about Python?
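One reason Python appeals to technical SEOs is that common crawl-hygiene checks are a few lines of standard library. A small sketch using `urllib.robotparser` to evaluate robots.txt rules offline (the rules and URLs below are hypothetical):

```python
from urllib import robotparser

# Hypothetical robots.txt rules, parsed offline (no network access needed).
rules = """
User-agent: *
Disallow: /private/
Allow: /
""".strip().splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# Check whether a generic crawler may fetch each URL.
print(rp.can_fetch("*", "https://example.com/"))           # True
print(rp.can_fetch("*", "https://example.com/private/x"))  # False
```

In practice you would point `set_url()` at a live site's robots.txt and call `read()`, then batch-check the URLs from a crawl export.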
Hadoop or Spark: is it an either-or proposition? - Slim Baltagi
Hadoop or Spark: is it an either-or proposition? An exodus away from Hadoop to Spark is picking up steam in the news headlines and talks! Away from marketing fluff and politics, this talk analyzes such news and claims from a technical perspective.
In practical terms, referring to components and tools from both the Hadoop and Spark ecosystems, this talk shows that the relationship between Hadoop and Spark is not either-or but can take different forms, such as evolution, transition, integration, alternation and complementarity.
Accelerating Big Data beyond the JVM - FOSDEM 2018 - Holden Karau
Many popular big data technologies (such as Apache Spark, BEAM, Flink, and Kafka) are built in the JVM, and many interesting tools are built in other languages (ranging from Python to CUDA). For simple operations the cost of copying the data can quickly dominate, and in complex cases can limit our ability to take advantage of specialty hardware. This talk explores how improved formats are being integrated to reduce these hurdles to co-operation.
Many popular big data technologies (such as Apache Spark, BEAM, and Flink) are built in the JVM, while many interesting AI tools are built in other languages, some requiring copying to the GPU. As many folks have experienced, while we may wish we could spend all of our time playing with cool algorithms, we often need to spend more of our time on data prep. Having to copy our data slowly between the JVM and the target language of computation can remove much of the benefit of our specialized tooling. Thankfully, as illustrated in the soon-to-be-released Spark 2.3, Apache Arrow and related tools offer the ability to reduce this overhead. This talk will explore how Arrow is being integrated into Spark and how it can be integrated into other systems, but also limitations and places where Apache Arrow will not magically save us.
Link: https://fosdem.org/2018/schedule/event/big_data_outside_jvm/
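The copy cost described above can be felt with nothing but the standard library: slicing a `bytes` object copies data, while a `memoryview` shares the underlying buffer. A toy illustration of the zero-copy idea that columnar formats like Arrow generalize across runtimes and languages:

```python
# Toy illustration of the copy problem: slicing bytes copies data, while a
# memoryview shares the underlying buffer. Arrow applies the same principle
# across process and language boundaries instead of within one interpreter.
payload = bytes(range(256)) * 4_000    # ~1 MB of sample data

copied = payload[:512]                 # new bytes object: data is copied
shared = memoryview(payload)[:512]     # zero-copy view of the same buffer

assert bytes(shared) == copied         # same logical contents...
assert shared.obj is payload           # ...but no duplication of the buffer
```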
How to build and run a big data platform in the 21st century - Ali Dasdan
This tutorial was presented at the IEEE Big Data Conference in 2019. It shows that building and running a big data platform for both real-time streaming and batch data processing, for all kinds of applications involving analytics, data science, reporting, and the like, can today be as easy as following a checklist. We live in a fortunate time when many of the components needed are already available in open source or as a service from commercial vendors. This tutorial shows how to put these components together at multiple sophistication levels, covering the spectrum from a basic reporting need to a full-fledged operation across geographically distributed regions with business continuity measures in place. It provides enough information and checklists that it can also serve as a go-to reference in the actual process of building and running such a platform.
DSpace is a free, open-source software application for creating institutional repositories. Out of the box DSpace is a rather simple, uninspired repository, but with a bit of work it can be turned into a workhorse for disseminating institutional knowledge. By committing to using DSpace as the canonical location for institutional outputs, you can focus on standardizing metadata taxonomies and carefully curating content, then leverage application program interfaces (APIs) to integrate it with other services. This strategy is more efficient, reduces duplication of outputs, and increases the potential impact of institutional knowledge through syndication, harvesting, etc.
This slide deck has been prepared for a workshop on Linked Data Publishing and Semantic Processing using the Redlink platform (http://redlink.co). The workshop, delivered at the Department of Information Engineering, Computer Science and Mathematics at Università degli Studi dell'Aquila, aimed at providing a general understanding of Semantic Web technologies and how they can be used in real-world use cases such as Salzburgerland Tourismus.
A brief introduction is also included on MICO (Media in Context), a European Union part-funded research project that provides cross-media analysis solutions for online multimedia producers.
SharePoint 2016 & Office 365: A Look Ahead To What's Coming - SPS Vancouver - Richard Harbridge
With SharePoint 2016 around the corner and Office 365 constantly releasing new functionality, it can be hard to feel ready for what will come over the next few years. Where should we invest in learning? What other technologies should we understand? Why are some things changing?
Join Richard Harbridge as he explores technology roadmaps and industry trends, and how Microsoft and many customers are planning for the challenges ahead.
Given at DevNation 2014, this presentation provides a high-level overview for developers of why user experience practices should be part of every project they undertake.
Through a focus on user-centric design practices, usability testing, and visual design, developers can deliver a first-class application that meets and exceeds their users' needs the first time, rather than undergoing serious rewrites due to misunderstandings between project stakeholders and users.
Keynote Open Source Diversity - Festival del Software Libre - Holden Karau
Exploring diversity in open source communities at the Festival del Software Libre in Mexico. We looked at the ASF and general GitHub data, and discussed a new program to encourage more people from Mexico to get involved in open source.
Building search and discovery services for Schibsted (LSRS '17) - Sandra Garcia
Presentation given at the Large Scale Recommender Systems workshop (LSRS) in Recsys 2017.
This presentation describes the search and discovery products we are working on in Schibsted for the domains of news and marketplaces as well as the challenges within each of these domains. It also covers how we bring these services into production including the system architecture and deployment process.
ApacheCon NA 2019: Customer segmentation and personalization using Apache Unomi - Serge Huber
In this session, you will learn all that's new with Apache Unomi, the open source Customer Data Platform (which graduated this year) based on the Apache Karaf runtime, and all that has happened since the last ApacheCon. You will discover how to easily integrate it with an existing website or SPA/PWA using its built-in web tracker, how to build customer segments, and how to use the API to personalize the experience for your users. You'll also learn how you can extend it to do almost anything, using either the built-in rules engine or your own plugins. You will also discover the new Docker compatibility and the upcoming GraphQL API. Finally, you'll learn what's next and how you can help the project.
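As an illustration of the web-tracker integration mentioned above, a tracker typically POSTs a context request to Unomi's `/context.json` endpoint. The payload below is a simplified sketch built in plain Python, not the full API; the field names are assumptions based on Unomi's event model:

```python
import json

# Simplified sketch of a context request a web tracker might send to
# Apache Unomi's /context.json endpoint. Field names (sessionId, events,
# eventType, scope) are assumptions for illustration, not the full schema.
context_request = {
    "sessionId": "example-session-123",        # hypothetical session id
    "events": [
        {
            "eventType": "view",
            "scope": "mysite",                 # hypothetical tracking scope
            "properties": {"path": "/pricing"},
        }
    ],
    "requiredProfileProperties": ["segments"],
}

body = json.dumps(context_request)
print(len(json.loads(body)["events"]))  # 1
```

A real tracker would send `body` as the POST payload and read the personalized profile and segment data out of the response.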
Talk given at the London AICamp meetup on 13 July 2023. It is an introduction to building open-source ChatGPT-like chatbots and some of the considerations to keep in mind while training/tuning them using Airflow.
Presentation given on the 21st of September 2021 at the London Beam Meet-up
Event website: https://www.meetup.com/London-Apache-Beam-Meetup/events/280442419/
Presentation given on the 15th July 2021 at the Airflow Summit 2021
Conference website: https://airflowsummit.org/sessions/2021/clearing-airflow-obstructions/
Recording: https://www.crowdcast.io/e/airflowsummit2021/40
Artificial intelligence is breaking into our lives. In the future everything will probably become clear, but for now questions keep arising, and increasingly these questions touch on morality and ethics. Which principles do we need to keep in mind while building machine learning algorithms? How does the editorial team affect the day-to-day development of applications at the BBC?
Place: Kharkiv National University of Radio Electronics, Ukraine
When: 17th November 2019.
Presented at PyCon UK 2018 (18 September 2018, Cardiff).
The slides are incomplete.
Recording available at:
https://www.youtube.com/watch?v=-weU0Zy4Yd8
Presentation about some common mistakes English learners make, and how it is possible to identify some of them automatically (spelling, capitalization and article use). This presentation was given at PyCon SK on the 12th of March 2016. Many of the results are due to the partnership between the University of Cambridge and Education First.
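One of the error types mentioned, wrong indefinite articles, lends itself to a tiny rule-based sketch. This is a rule of thumb only (real systems use pronunciation data, since "an hour" and "a university" break the spelling-based rule), and the function name is invented for illustration:

```python
import re

# Rule-of-thumb article checker: flag "a" before a word starting with a
# vowel letter and "an" before a consonant letter. Deliberately naive;
# pronunciation-based exceptions like "an hour" are not handled.
def article_errors(text):
    errors = []
    for m in re.finditer(r"\b(a|an)\s+(\w+)", text, re.IGNORECASE):
        article, word = m.group(1).lower(), m.group(2)
        starts_vowel = word[0].lower() in "aeiou"
        if article == "a" and starts_vowel:
            errors.append(f"a {word} -> an {word}")
        elif article == "an" and not starts_vowel:
            errors.append(f"an {word} -> a {word}")
    return errors

print(article_errors("She bought a apple and an banana."))
# ['a apple -> an apple', 'an banana -> a banana']
```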
Slides presenting some numbers from the PythonBrasil[8] conference (PyCon Brasil), which took place in Rio de Janeiro in November 2012. Authors: @tati_alchueyr and @turicas
Developing mobile applications with Python and Android - Tatiana Al-Chueyr
Talk presented at PyConAr 2011 (Junín, Argentina) about how to develop mobile applications with Python and Android.
The example code can be downloaded at:
http://github.com/tatiana/pyandroid
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing toolchains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how they work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been easier to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
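One of the hardening steps such a guide typically covers is a restrictive pod security context, so the container cannot run as root, escalate privileges, or write to its root filesystem. A minimal sketch (pod name and image are invented for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                          # hypothetical pod name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0     # hypothetical image
      securityContext:
        runAsNonRoot: true                    # refuse to start as UID 0
        allowPrivilegeEscalation: false       # block setuid-style escalation
        readOnlyRootFilesystem: true          # immutable root filesystem
        capabilities:
          drop: ["ALL"]                       # drop all Linux capabilities
```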
Connector Corner: Automate dynamic content and events by pushing a button - DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
- Create a campaign using Mailchimp with merge tags/fields
- Send an interactive Slack channel message (using buttons)
- Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
- Your campaign sent to target colleagues for approval
- If the "Approve" button is clicked, a Jira/Zendesk ticket is created for the marketing design team
- But if the "Reject" button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
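The moving part the webinar demonstrates is JMeter pushing samples into InfluxDB. As a rough sketch of what travels over the wire, the snippet below formats one load-test sample as an InfluxDB line-protocol point; the measurement, tag, and field names are illustrative, not necessarily the exact ones JMeter's Backend Listener emits.

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Format one sample as an InfluxDB line-protocol point:
    measurement,tag=...,tag=... field=...,field=... timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

# A JMeter-style sample: transaction name and status as tags,
# average response time and sample count as fields (names are illustrative).
point = to_line_protocol(
    "jmeter",
    tags={"transaction": "login", "statut": "ok"},
    fields={"avg": 123.0, "count": 42},
    timestamp_ns=1700000000000000000,
)
print(point)
# jmeter,statut=ok,transaction=login avg=123.0,count=42 1700000000000000000
```

Grafana then queries these points by measurement and tags to draw the dashboards shown in the demo.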
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey, asking over 3,000 respondents about the TV they own, the features they look for in a new TV, and their TV buying preferences.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio, using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an early stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
QCon SP - recommended for you
1. Building a recommendation platform
with Hadoop and ElasticSearch
Tatiana Al-Chueyr Martins
@tati_alchueyr
tatiana.alchueyr@gmail.com
QCon Rio
24th September 2014
2. tati.__doc__
● Programmer since 2002
● Pythonist since 2003
● Open-source enthusiast
● Computer Engineer by UNICAMP
● Currently Senior Software Engineer at Globo.com
#oss #freesoftware #python #linux #android #bigdata #cloud
tati_alchueyr tatiana alchueyr
5. free services
"If you're not paying for it, you're not the
customer. You're the product being sold."
- SomeOne (in the past, apparently online)
6. free services
"If you're not paying for it, you're not the
customer. You're the product being sold."
- SomeOne (in the past, apparently online)
If you are paying, it’s no better.
Interesting comparison between paid and unpaid Yahoo!, Google and Facebook services:
http://revdancatt.com/2013/02/06/the-problem-with-if-youre-not-paying-youre-the-product/
10. online services advertisement
How do you choose which website to advertise on?
How do you measure how successful an advertisement is?
11. online services advertisement
common metrics:
● conversion: # goal achievements / visits
● page-views: number of times a page is viewed
Learn more about conversion in marketing:
http://en.wikipedia.org/wiki/Conversion_marketing
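The conversion metric defined on the slide above fits in one line of code; the sketch below just restates that definition, with an invented example figure.

```python
def conversion_rate(goal_achievements, visits):
    """conversion = # goal achievements / visits (as defined on slide 11)."""
    if visits == 0:
        return 0.0
    return goal_achievements / visits

# e.g. 30 sign-ups out of 1,200 visits (illustrative numbers)
print(conversion_rate(30, 1200))  # 0.025
```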
12. globo
● 2nd largest TV network worldwide by annual revenue
● produces 2,400 hours of entertainment per year
● produces 3,000 hours of journalism per year
● covers 98.6% of the Brazilian territory
● covers 99.5% of the Brazilian population
● US$ 7.2 billion annual revenue (2013)
More about Rede Globo:
http://en.wikipedia.org/wiki/Rede_Globo
18. globo.com daily social work
we help people find the information they want
after all, we want to make people happy
19. globo.com daily social work
we help people find the information they want
after all, we want to make people happy
so they keep surfing on our website…
22. what you are at globo.com
you are data to us
example of user representation (*) in JSON:
{
  "(1, 12359058102)": {
    "soccer team": "São Paulo",
    "city": "Rio de Janeiro",
    "likes": ["Python", "QCon", "Caelum"]
  }
}
* simplified user representation for explanation-only
23. what you are at globo.com
what you tell us directly
● e.g. a Firefox or Chrome plugin
24. what you are at globo.com
what you tell us indirectly
● news you read
● videos you watch
● things you search
● quizzes you answer
● things you like
● things you comment
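Combining both kinds of signals, a profile like the one on slide 22 could be assembled roughly as follows. The field names mirror the slide's illustrative JSON; the merge logic is an assumption for explanation only, not Globo.com's actual code.

```python
def build_profile(explicit, implicit_events):
    """Merge what the user tells us directly with implicit signals
    (news read, videos watched, likes) into one profile dict."""
    profile = dict(explicit)          # e.g. {"city": ..., "likes": [...]}
    likes = set(profile.get("likes", []))
    for event in implicit_events:
        if event["type"] in ("read", "watch", "like"):
            likes.update(event.get("topics", []))
    profile["likes"] = sorted(likes)
    return profile

profile = build_profile(
    {"city": "Rio de Janeiro", "likes": ["Python"]},
    [{"type": "read", "topics": ["QCon"]},
     {"type": "watch", "topics": ["soccer"]}],
)
print(profile["likes"])  # ['Python', 'QCon', 'soccer']
```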
35. pre-computed platform
1. user performs an action
2. action data is logged into HDFS (Apache Hadoop)
3. from time to time, Pig scripts run over a range of Hadoop data, creating pre-computed recommendations for all users
4. recommendations are stored in Apache HBase
the next time a user accesses globo.com:
- the recommendation is ready and is retrieved from HBase
36. pre-computed platform
(1) recommendations are pre-computed from time to time, based on users' history
(2) on the next access after pre-computation, the user sees the new recommendations
[diagram: "pre-compute recommendations for all users" → simple query]
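A toy stand-in for the batch flow above: the "Pig job" is reduced to a Python function that pre-computes recommendations for every user, with plain dicts standing in for HDFS and HBase. All data, names, and logic here are illustrative, not the platform's actual pipeline.

```python
action_log = {  # HDFS-style log of user actions (illustrative)
    "user1": ["soccer", "soccer", "politics"],
    "user2": ["tech", "tech", "soccer"],
}
catalog = {  # items available per topic (illustrative)
    "soccer": ["match highlights"],
    "tech": ["new gadget review"],
    "politics": ["election coverage"],
}

def precompute_all(log):
    """Batch job: recommend items for each user's most frequent topic,
    for every user at once — the role the Pig scripts play on slide 35."""
    store = {}
    for user, actions in log.items():
        top_topic = max(set(actions), key=actions.count)
        store[user] = catalog.get(top_topic, [])
    return store

hbase_like_store = precompute_all(action_log)  # runs from time to time
print(hbase_like_store["user1"])               # simple lookup at request time
```

At request time nothing is computed: serving is a single key lookup, which is why this path is cheap but always slightly stale.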
37. real-time platform
1. user performs an action (event)
2. data is logged into HDFS (Apache Hadoop)
3. every minute, event data is sent to Apache Kafka
4. an internal process (Horizon) reads events from Kafka and writes them into HBase
the next time a user accesses globo.com:
- a process (Lex) identifies the user's history and derives their interests
- based on the user's topics of interest, a query is run on ElasticSearch
38. real-time platform
(1) store all raw data
(2) on the next access, a new query is run against the database
[diagram: "raw data for all users" → complex query]
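The last request-time step on slide 37 — querying ElasticSearch by the user's topics of interest — could look roughly like this. The field name (`topics`) and the query shape are assumptions for illustration, not the platform's actual query.

```python
def build_es_query(topics_of_interest, size=10):
    """Build an Elasticsearch query body that matches documents tagged
    with at least one of the user's topics of interest."""
    return {
        "size": size,
        "query": {
            "bool": {
                "should": [{"match": {"topics": t}} for t in topics_of_interest],
                "minimum_should_match": 1,
            }
        },
    }

query = build_es_query(["soccer", "politics"])
print(query["query"]["bool"]["should"][0])  # {'match': {'topics': 'soccer'}}
```

Unlike the pre-computed path, this "complex query" runs on every access, trading serving cost for freshness.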
59. motivation: not only words
What is "São Paulo" in this news...?
a. City São Paulo
b. State São Paulo (SP)
c. Saint São Paulo
d. São Paulo Futebol Clube
e. Other: _____________
60. motivation: not only words
What is "São Paulo" in this news...?
a. http://dbpedia.org/resource/S%C3%A3o_Paulo
b. http://dbpedia.org/resource/S%C3%A3o_Paulo_(state)
c. http://dbpedia.org/resource/Paul_the_Apostle
d. http://dbpedia.org/resource/S%C3%A3o_Paulo_FC
e. Other: _____________
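A deliberately naive sketch of the disambiguation the two slides above motivate: choose a DBpedia URI for the surface form "São Paulo" by overlapping context words with per-candidate cue words. A real system would use named entity recognition and a knowledge graph; the candidate lists and cue words below are purely illustrative.

```python
# Candidate DBpedia URIs per surface form, each with illustrative cue words.
CANDIDATES = {
    "São Paulo": {
        "http://dbpedia.org/resource/S%C3%A3o_Paulo": {"city", "zone", "street"},
        "http://dbpedia.org/resource/S%C3%A3o_Paulo_FC": {"club", "match", "goal"},
        "http://dbpedia.org/resource/Paul_the_Apostle": {"saint", "apostle"},
    }
}

def link_entity(surface, context_words):
    """Pick the candidate URI whose cue words best overlap the context."""
    scores = {
        uri: len(cues & context_words)
        for uri, cues in CANDIDATES.get(surface, {}).items()
    }
    return max(scores, key=scores.get) if scores else None

uri = link_entity("São Paulo", {"club", "scored", "goal"})
print(uri)  # http://dbpedia.org/resource/S%C3%A3o_Paulo_FC
```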
66. Isabella Nardoni was killed on 29 March 2008 in the North Zone of São Paulo (Photo: Reprodução)
Isabella de Oliveira Nardoni, aged 5, was killed on the night of 29 March 2008. Forensic analysis concluded that the girl was thrown from the sixth floor of the building where her father, Alexandre Nardoni, her stepmother, Anna Carolina Jatobá, and the couple's two young children lived, in Vila Isolina Mazzei, in the north zone of São Paulo.
Isabella's grave becomes a visitation site in SP; the Nardoni couple is in prison.
The Isabella Nardoni case
Juliana Cardilli, G1 SP
Semantic markup in web pages, using vocabularies such as RDF, FOAF, GEO, Dublin Core, and SKOS
motivation
74. outcomes
● Replacing words with entities improved:
○ finding
○ linking
○ reconciling
○ organizing
multiple layers of information
75. outcomes
● Flexible ways to organize content
● Easier to find related content
● Explicit relations derived from annotated content
● Up-to-date topic pages with little editorial effort
● Linking content across different web products
● Seamless navigation leading to a flow state
88. benefits of semantics in recsys
● more precision
● more diversity compared to purely statistical algorithms such as TF-IDF
● the possibility of inferring categories and topics that are not explicit in the text
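For reference, TF-IDF — the statistical baseline the slide compares against — fits in a few lines; the toy corpus below is invented for illustration.

```python
import math

def tf_idf(term, doc, corpus):
    """Term frequency x inverse document frequency for one term in one doc.
    Words are weighted by how rare they are across the corpus — with no
    notion of what entity a word refers to, which is the gap semantics fills."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / df) if df else 0.0
    return tf * idf

corpus = [["são", "paulo", "won"],
          ["election", "in", "são", "paulo"],
          ["python", "tutorial"]]
print(round(tf_idf("won", corpus[0], corpus), 3))  # 0.366
```

Note that "são paulo" gets the same weight whether it names the city or the football club — exactly the ambiguity slides 59-60 illustrate.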
90. Video developed by Rodrigo Senra and me at
Globo.com, related to our initial studies in this field
https://www.youtube.com/watch?v=6UW3frySEnc
benefits of semantics in recsys