Lecture by Marco Tagliasacchi (Politecnico di Milano) for the Summer School on Social Media Modeling and Search, a European Chapter of ACM SIGMM event, supported by the CUbRIK and Social Sensor projects.
10-14 September, Fira, Santorini, Greece
This document provides an introduction and overview of NoSQL databases. It notes that while NoSQL databases were created to solve specific pain points around scaling large amounts of data, many situations do not actually require a NoSQL solution. It then covers common distribution models for NoSQL databases, such as replication, sharding, and consistent hashing, and gives examples of companies that developed NoSQL databases to solve their particular data problems.
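The consistent-hashing scheme mentioned in that overview can be illustrated with a minimal sketch (not taken from the presentation itself; the node names and key are hypothetical, and real systems add virtual nodes for better balance):

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    """Map a string to a point on the hash ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Minimal consistent-hash ring: a key belongs to the first node clockwise."""

    def __init__(self, nodes):
        # A sorted list of (hash, node) pairs forms the ring.
        self.ring = sorted((_hash(n), n) for n in nodes)

    def node_for(self, key: str) -> str:
        h = _hash(key)
        hashes = [point for point, _ in self.ring]
        # Wrap around to the start of the ring if the key hashes past the end.
        i = bisect.bisect(hashes, h) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
owner = ring.node_for("user:42")  # deterministic owner for this key
```

The appeal over `hash(key) % n` is that adding or removing one node moves only the keys adjacent to it on the ring, rather than reshuffling almost everything.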
The document provides a checklist of 37 personal documents organized into categories including source documents, company details, educational qualifications, last working details, ID proofs, and other documents. The checklist includes documents such as photographs, resume, test papers, offer letter, joining form, degree certificates, last salary slip, PAN card, and passport copy. It specifies the number of copies needed of each document.
This document appears to be an exam paper for a mechanics course. It contains six multi-part questions testing concepts such as forces, kinematics, and dynamics. The questions pose contextual word problems with diagrams, requiring students to set up and solve equations to find the requested values. The paper provides space for working and answers, includes instructions for candidates, is signed, and includes information for examiners.
Greg Lollback: Variation in biomass estimation among replicated PPBio PTER plo... (TERN Australia)
This document discusses measuring and estimating aboveground live biomass (AGLB) in native Australian forests. It finds that there is significant variation in AGLB among 32 replicated 1-hectare plots in a 910-hectare forest patch. AGLB across plots ranged from 26 to 248 tonnes per hectare, with a mean of 146.51 tonnes per hectare and standard deviation of 39.41 tonnes per hectare. Drivers of higher biomass included greater rainfall, lower maximum temperatures, deeper soils, and less frequent fires.
The document discusses a framework called CUbRIK that uses human computation to improve multimedia search. It presents a case study on using the crowd to detect trademark logos in videos. The framework involves designing crowd tasks, matching people to tasks, aggregating outputs, and evaluating performance. Experimental results show the crowd improves precision over automatic methods alone. Future work includes refining task design and better matching people to tasks.
Arzu Sahu is seeking a role as a Data Specialist or ETL Developer. She has 2.3 years of experience in IT with a focus on data warehousing, ETL development using Informatica and Oracle, and data quality management. Her skills include Informatica PowerCenter, Oracle, SQL, Unix, and HP Quality Center. She is currently an ETL Developer at IBM working on projects involving data migration and management for clients in banking, finance, and travel.
Cisco Aironet 2800 and 3800 APs: keep your connected world spinning (IT Tech)
The Cisco Aironet 2800 and 3800 series access points were designed to address bandwidth bottlenecks caused by the growing number of wireless devices in modern workplaces. The access points support high-speed 802.11ac Wave 2 technology, multi-gigabit Ethernet backhaul, and features such as MU-MIMO and flexible radio assignment to improve network performance and client capacity for a high density of users and devices. The new access points provide up to 5.2 Gbps of wireless bandwidth and are supported by the latest AireOS software.
Matching Game Mechanics and Human Computation Tasks in Games with a Purpose (CUbRIK Project)
The document discusses using game mechanics to design Games with a Purpose (GWAPs) to solve human computation tasks, outlines a development process for GWAPs including defining the task and matching it to appropriate game mechanics, and provides an example of using line drawing mechanics to segment fashion images and identify trends.
CUbRIK application for Digital Humanities, illustrated during the demo session of the International Workshop on Multimedia Signal Processing (IEEE MMSP 2013)
The document discusses the CUbRIK project which aims to reconstruct social networks through historical sources using a combination of automated and human-powered techniques. It outlines four pillars of the project: connecting to researcher needs, creating a structured repository, developing an efficient indexing process, and tools for analysis and visualization. Key challenges include identifying entities, verifying identities over time, analyzing relationships, and ensuring rights compliance. The project will utilize both clickworkers and subject experts to verify entity detections and annotations. It aims to represent the ambiguities of history rather than a single truth.
Building a social graph for the history of Europe: the CUbRIK histoGraph (CUbRIK Project)
The document discusses building a social graph from historical image collections. It describes the CVCE and Digital Humanities Lab, and their vision of creating a social graph from images. The CUbRIK approach involves sourcing researcher requirements, building an entity repository, efficient indexing, and tools for visualization and analysis. Challenges include identifying people, places and events in images over time and verifying these. The approach involves crowd-sourcing verification and integrating rights management. An evaluation phase is planned in July to test the social graph prototype.
histoGraph was developed as part of the EU-funded CUbRIK project to create an interface for accessing historical sources and discovering links between entities. It builds a social graph of people in photos of European integration history by having humans and AI work together to identify faces, which are then linked based on co-occurrence. Users can interact with the graph to explore connections between individuals and supporting documents. The system represents the complexity of truth in the humanities by allowing multiple answers to identity questions and facilitating discussion between experts.
The CUbRIK Social Graph Visual Interface: a component developed to represent the dependencies of a given person in a given context by analysing the co-occurrences of person entities in photographs.
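The co-occurrence linking behind such a graph can be sketched in a few lines (an illustration only, not the project's implementation; the photo annotations below are hypothetical):

```python
from collections import Counter
from itertools import combinations

def cooccurrence_graph(photos):
    """Count how often each pair of people appears in the same photograph.

    `photos` maps a photo id to the list of people identified in it.
    Returns a Counter keyed by alphabetically ordered (person, person) pairs,
    i.e. a weighted edge list for the social graph.
    """
    edges = Counter()
    for people in photos.values():
        # Sorting and deduplicating makes each pair a canonical, unique edge key.
        for a, b in combinations(sorted(set(people)), 2):
            edges[(a, b)] += 1
    return edges

# Hypothetical annotations of three photographs:
photos = {
    "p1": ["Adenauer", "Schuman"],
    "p2": ["Adenauer", "Schuman", "Monnet"],
    "p3": ["Monnet", "Schuman"],
}
graph = cooccurrence_graph(photos)
# graph[("Adenauer", "Schuman")] == 2: they appear together in two photos.
```

Edge weights like these give the visualisation a natural measure of how strongly two people are connected in the source material.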
Mining Emotions in Short Films: User Comments or Crowdsourcing? (CUbRIK Project)
This document discusses mining emotions from user comments on short films. It presents an approach that creates an emotion vector for each short film based on extracting terms from user comments on YouTube and associating them with emotions from the NRC Emotion Lexicon. It then compares the cosine similarity between emotion vectors built from expert judgments and those built using Amazon Mechanical Turk workers or automatically from YouTube comments. The goal is to determine if crowdsourcing or YouTube comments can accurately extract emotions expressed in reviews of short films.
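The pipeline described above (lexicon lookup over comment terms, then cosine comparison of the resulting emotion vectors) can be sketched as follows. This is a toy illustration: the tiny lexicon stands in for the NRC Emotion Lexicon, and the comments are invented.

```python
import math
from collections import Counter

# Toy stand-in for the NRC Emotion Lexicon: term -> emotions it evokes.
LEXICON = {
    "beautiful": ["joy"],
    "scary": ["fear"],
    "sad": ["sadness"],
    "amazing": ["joy", "surprise"],
}
EMOTIONS = ["joy", "fear", "sadness", "surprise"]

def emotion_vector(comments):
    """Aggregate lexicon emotions over all comment tokens into one vector."""
    counts = Counter()
    for comment in comments:
        for token in comment.lower().split():
            for emotion in LEXICON.get(token, []):
                counts[emotion] += 1
    return [counts[e] for e in EMOTIONS]

def cosine(u, v):
    """Cosine similarity between two equal-length vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

crowd = emotion_vector(["amazing and beautiful short", "a sad ending"])
experts = emotion_vector(["beautiful but sad"])
similarity = cosine(crowd, experts)
```

Comparing expert-derived vectors against crowd- or comment-derived ones with this similarity is exactly the kind of evaluation the summary describes.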
This document discusses game design, playtesting, and games with a purpose. It begins with introducing the speaker and their background in robotics, AI, game design, and crowdsourcing. The agenda then covers the differences between play and games, pointers to game design including key elements like players, objectives, procedures, rules, and outcomes. Games with a purpose are introduced as games that generate useful data as a byproduct of play. Examples of specific games are discussed and the process of validating gameplay through playtesting is covered. Traditional playtesting methods like observation, surveys and their issues are also outlined.
CUbRIK Research at CIKM 2012: Efficient Jaccard-based Diversity Analysis of Large Document Collections (CUbRIK Project)
Presentation at CIKM 2012 of the CUbRIK research paper "Efficient Jaccard-based Diversity Analysis of Large Document Collections", authored by Fan Deng, Stefan Siersdorfer and Sergej Zerr of the L3S Research Center, a partner of the CUbRIK Consortium.
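The Jaccard measure underlying the paper's diversity analysis can be sketched as follows (a minimal illustration of the coefficient itself, not of the paper's efficient algorithms; the sample term sets are hypothetical):

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |A ∩ B| / |A ∪ B| (defined as 1.0 for two empty sets)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Two documents represented as sets of terms:
d1 = {"crowd", "search", "multimedia"}
d2 = {"crowd", "search", "video"}
score = jaccard(d1, d2)  # 2 shared terms out of 4 distinct terms -> 0.5
```

Low pairwise Jaccard scores across a collection indicate high diversity, which is what such an analysis quantifies at scale.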
CUbRIK Tutorial at ICWE 2013: part 2 - Introduction to Games with a Purpose (CUbRIK Project)
8 July 2013
Part 2 of the tutorial presented at ICWE 2013 by Luca Galli (Politecnico di Milano).
Crowdsourcing and human computation are novel disciplines that enable the design of computational processes in which humans act as task executors. In this context, Games with a Purpose are an effective means of channelling, through computer games and in a constructive manner, the human brainpower required to perform tasks that computers are unable to perform. This tutorial introduces the core research questions in human computation, with a specific focus on the techniques required to manage structured and unstructured data. The second half of the tutorial delves into game design for serious tasks, with an emphasis on games for human computation. Our goal is to give participants a wide yet complete overview of the research landscape: we aim to give practitioners a solid understanding of best practices for designing and running human computation tasks, while providing academics with solid references and, possibly, promising ideas for their future research activities.
CUbRIK Tutorial at ICWE 2013: part 1 - Introduction to Human Computation (CUbRIK Project)
8 July 2013
Part 1 of the tutorial presented at ICWE 2013 by Alessandro Bozzon (Delft University of Technology).
Crowdsourcing and human computation are novel disciplines that enable the design of computational processes in which humans act as task executors. In this context, Games with a Purpose are an effective means of channelling, through computer games and in a constructive manner, the human brainpower required to perform tasks that computers are unable to perform. This tutorial introduces the core research questions in human computation, with a specific focus on the techniques required to manage structured and unstructured data. The second half of the tutorial delves into game design for serious tasks, with an emphasis on games for human computation. Our goal is to give participants a wide yet complete overview of the research landscape: we aim to give practitioners a solid understanding of best practices for designing and running human computation tasks, while providing academics with solid references and, possibly, promising ideas for their future research activities.
Presentation made at INSPIRE 2013, in the Semantics session, by Feroz Farazi of the University of Trento.
The research leading to these results has received funding from the CUbRIK Collaborative Project, partially funded by the European Commission's 7th Framework ICT Programme for Research and Technological Development under Grant agreement no. 287704.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Trusted Execution Environment for Decentralized Process MiningLucaBarbaro3
Presentation of the paper "Trusted Execution Environment for Decentralized Process Mining" given during the CAiSE 2024 Conference in Cyprus on June 7, 2024.
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin...Tatiana Kojar
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
This presentation provides valuable insights into effective cost-saving techniques on AWS. Learn how to optimize your AWS resources by rightsizing, increasing elasticity, picking the right storage class, and choosing the best pricing model. Additionally, discover essential governance mechanisms to ensure continuous cost efficiency. Whether you are new to AWS or an experienced user, this presentation provides clear and practical tips to help you reduce your cloud costs and get the most out of your budget.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
HCL Notes and Domino license cost reduction in the world of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new type of licensing works and what benefits it brings you. Above all, you certainly want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also approaches that can lead to unnecessary spending, e.g., when a person document is used instead of a mail-in database for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to keep an overview. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics will be covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices to put into action immediately
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack (shyamraj55)
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
A Comprehensive Guide to DeFi Development Services in 2024 (Intelisync)
DeFi represents a paradigm shift in the financial industry. Instead of relying on traditional, centralized institutions like banks, DeFi leverages blockchain technology to create a decentralized network of financial services. This means that financial transactions can occur directly between parties, without intermediaries, using smart contracts on platforms like Ethereum.
In 2024, we are witnessing an explosion of new DeFi projects and protocols, each pushing the boundaries of what’s possible in finance.
In summary, DeFi in 2024 is not just a trend; it’s a revolution that democratizes finance, enhances security and transparency, and fosters continuous innovation. As we proceed through this presentation, we'll explore the various components and services of DeFi in detail, shedding light on how they are transforming the financial landscape.
At Intelisync, we specialize in providing comprehensive DeFi development services tailored to meet the unique needs of our clients. From smart contract development to dApp creation and security audits, we ensure that your DeFi project is built with innovation, security, and scalability in mind. Trust Intelisync to guide you through the intricate landscape of decentralized finance and unlock the full potential of blockchain technology.
Ready to take your DeFi project to the next level? Partner with Intelisync for expert DeFi development services today!
4. + Crowdsourcing
- Crowdsourcing is an example of human computing
- Use an online community of human workers to complete useful tasks
- The task is outsourced to an undefined public
- Main idea: design tasks that are
  - Easy for humans
  - Hard for machines
6. + Applications in multimedia retrieval
- Create annotated data sets for training
  - Reduces both cost and time needed to gather annotations...
  - ...but annotations might be noisy!
- Validate the output of multimedia retrieval systems
- Query expansion / reformulation
7. + Creating annotated training sets [Sorokin and Forsyth, 2008]
- Collect annotations for computer vision data sets
  - people segmentation
(Figure: annotation examples from Protocol 1 and Protocol 2)
8. + Creating annotated training sets [Sorokin and Forsyth, 2008]
- Collect annotations for computer vision data sets
  - people segmentation and pose annotation
(Figure 1: example results obtained from the annotation experiments, Protocols 2-4)
9. + Creating annotated training sets [Sorokin and Forsyth, 2008]
- Experiment 3: trace the boundary of the person
  - Score: area(XOR)/area(AND), the lower the better. Mean 0.21, std 0.14, median 0.16
- Experiment 4: click on 14 landmarks
  - Score: mean error in pixels between annotation points, the lower the better. Mean 8.71, std 6.29, median 7.35
- Observations:
  - Annotators make errors
  - Quality of annotators is heterogeneous
  - The quality of the annotations depends on the difficulty of the task
(Figure 5: quality details for all annotations in experiments 3 and 4; for every image the best fitting pair of annotations is selected and the score of the best pair is shown. Figure 6: annotation quality per landmark in experiment 4, e.g., rWrist, rHip, rAnkle, Neck, rElbow, lHip, rKnee, lElbow, rShoulder, Head, lKnee, lWrist, lShoulder, lAnkle.)
10. + Creating annotated training sets [Soleymani and Larson, 2010]
- MediaEval 2010 Affect Task
- Use of Amazon Mechanical Turk to annotate the Affect Task Corpus
  - 126 videos (2-5 mins in length)
- Annotate
  - Mood (e.g., pleased, helpless, energetic, etc.)
  - Emotion (e.g., sadness, joy, anger, etc.)
  - Boredom (nine point rating scale)
  - Like (nine point rating scale)
11. + Creating annotated training sets [Nowak and Ruger, 2010]
- Crowdsourcing image concepts. 53 concepts, e.g.,
  - Abstract categories: partylife, beach holidays, snow, etc.
  - Time of the day: day, night, no visual cue
  - ...
- Subset of 99 images from the ImageCLEF2009 dataset
(Excerpt from the paper: some categories, such as Place (Indoor, Outdoor, No Visual Place), contain mutually exclusive concepts, while others, such as Landscape Elements, contain optional concepts. Annotators choose exactly one concept for categories with mutually exclusive concepts and select all applicable concepts for optional concepts; photos are annotated at an image-based level. The design of the HITs at MTurk mirrors the annotation tool used by the expert annotators: each HIT consists of the annotation of one image with all applicable concepts, arranged as a question survey in three sections: Scene Description, Representation, and Pictured Objects. Figure 1: annotation tool used for the acquisition of expert annotations.)
12. + Creating annotated training sets [Nowak and Ruger, 2010]
- Study of expert and non-expert labeling
- Inter-annotation agreement among experts:
  - very high
- Influence of the expert ground truth on concept-based retrieval ranking:
  - very limited
- Inter-annotation agreement among non-experts:
  - High, although not as good as among experts
- Influence of averaged annotations (experts vs. non-experts) on concept-based retrieval ranking:
  - Averaging filters out noisy non-expert annotations
13. + Creating annotated training sets [Vondrick et al., 2010]
- Crowdsourcing object tracking in video
- Annotators draw bounding boxes
(Fig. 2 of the paper: the video labeling user interface; all previously labeled entities are shown)
14. + Creating annotated training sets [Vondrick et al., 2010]
- Annotators label the enclosing bounding box of an entity every T frames
- Bounding boxes at intermediate time instants are interpolated
- Interesting trade-off between
  - Cost of MTurk workers
  - Cost of interpolation on Amazon EC2 cloud
(Figure: (a) field drills, (b) basketball players)
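The keyframe-plus-interpolation scheme above can be sketched in a few lines. The linear interpolation below is a deliberate simplification (the actual system also considers smarter, vision-based interpolation of the in-between frames), and all function and variable names are illustrative:

```python
# Sketch: annotators give a box every T frames; boxes in between are
# linearly interpolated between the two enclosing keyframes.

def interpolate_boxes(box_a, box_b, t):
    """Linearly interpolate two (x, y, w, h) boxes; t in [0, 1]."""
    return tuple(a + t * (b - a) for a, b in zip(box_a, box_b))

def fill_track(keyframes):
    """keyframes: dict mapping frame index -> (x, y, w, h) box.
    Returns a dict with a box for every frame between the keyframes."""
    frames = sorted(keyframes)
    track = {}
    for f0, f1 in zip(frames, frames[1:]):
        for f in range(f0, f1):
            t = (f - f0) / (f1 - f0)
            track[f] = interpolate_boxes(keyframes[f0], keyframes[f1], t)
    track[frames[-1]] = keyframes[frames[-1]]
    return track

# Keyframes labeled every T = 4 frames; frame 2 lands halfway in between.
track = fill_track({0: (10, 10, 50, 80), 4: (30, 10, 50, 80)})
# track[2] == (20.0, 10.0, 50.0, 80.0)
```

The trade-off on the slide is then between paying workers for more keyframes (smaller T) and paying for more computation to recover the in-between boxes.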
15. + Creating annotated training sets [Urbano et al., 2010]
- Goal: evaluation of music information retrieval systems
- Use crowdsourcing as an alternative to experts to create ground-truths of partially ordered lists
- Good agreement (92% complete + partial) with experts
(Excerpt from the paper: the HIT design was very simple. Workers were asked to listen to two incipits and then judge which variation was more similar to the original melody, with 3 options: A is more similar, B is more similar, or they are equally similar or dissimilar. If one melody was part of another, workers were instructed to consider them equally similar, to comply with the original guidelines. Optional questions asked for musical background, comments, and suggestions. Preference judgments such as C<F, D<F, E<F, A<F, G=F, B<F are aggregated iteratively around pivot documents into a partially ordered list of relevance groups; rank samples can be compared with Mann-Whitney U tests.)
16. + Validate the output of MIR systems [Snoek et al., 2010][Freiburg et al., 2011]
- Search engine for archival rock 'n' roll concert video
- Use of crowdsourcing to improve, extend and share automatically detected concepts in video fragments
(Figure 1: eleven common concert concepts detected automatically, for which user feedback is collected: Audience, Close-up, Hands, Pinkpop hat, Keyboard, Guitar player, Drummer, Over the shoulder, Singer, Stage, Pinkpop logo. Figure 2: timeline-based video player where colored dots correspond to automated visual detection results; users can navigate directly to fragments of interest by interacting with the dots, which pop up a feedback overlay. Figure 4: results for Experiment 2, quality vs. user-feedback agreement.)
17. + Validate the output of MIR systems [Steiner et al., 2011]
- Propose a browser extension to navigate detected events in videos
  - Visual events (shot changes)
  - Occurrence events (analysis of metadata by means of NLP to detect named entities)
  - Interest-based events (click counters on detected visual events)
(Excerpt from the paper: when a user starts watching a video, three event detection processes start. Shots are detected by visually analyzing the content, entirely client-side via a browser extension using the HTML5 JavaScript APIs of the <video> and <canvas> elements; the user can then jump into a specific shot by clicking a representative still frame. Named entities detected in the available video metadata via NLP techniques are presented in a list with a timeline-like interface for jumping to the shots where they occur. As soon as shots and visual events are detected, event listeners attached to each shot count clicks as an expression of interest. Fig. 2: screenshot of the YouTube browser extension showing the three event types.)
18. + Validate the output of MIR systems [Goeau et al., 2011]
- Visual plant species identification
- Based on local visual features
- Crowdsourced validation
(Excerpt from the paper: at the time of writing, 858 images had been uploaded by new users, some with uniform background and some with natural background, drawn from a set of 55 species; classification takes around 2 seconds. Figure 1: GUI of the web application.)
19. + Validate the output of MIR systems [Yan et al., 2010]
- CrowdSearch combines
  - Automated image search: local processing on mobile phones + backend processing
  - Real-time human validation of search results: Amazon Mechanical Turk
- Studies the trade-off in terms of
  - Delay
  - Accuracy
  - Cost
- More on this later...
(Excerpt from the paper: to balance these trade-offs, CrowdSearch uses an adaptive algorithm with delay and result prediction models of human responses to judiciously use human validation; once a candidate image is validated, it is returned to the user as a valid search result. Design choices for crowdsourced validation include: 1) constructing tasks so they are likely to be answered quickly, 2) minimizing human error and bias, and 3) pricing a validation task to minimize delay. Figure 2: an image search query, candidate images, and human validation tasks.)
22. + Annotation model
- A set of objects to annotate: i = 1, ..., I
- A set of annotators: j = 1, ..., J
- Types of annotations
  - Binary
  - Categorical (multi-class)
  - Numerical
  - Other
24. + Aggregating annotations
- Majority voting (baseline)
  - For each object, assign the label that received the largest number of votes
- Aggregating annotations
  - [Dawid and Skene, 1979]
  - [Snow et al., 2008]
  - [Whitehill et al., 2009]
  - ...
- Aggregating and learning
  - [Sheng et al., 2008]
  - [Donmez et al., 2009]
  - [Raykar et al., 2010]
  - ...
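The majority-voting baseline can be made concrete in a few lines of Python; the (object, annotator, label) triple format below is only an assumption for the example:

```python
# Majority voting: for each object, assign the label with the most votes.
from collections import Counter, defaultdict

def majority_vote(annotations):
    """annotations: iterable of (object_id, annotator_id, label) triples.
    Returns a dict object_id -> winning label."""
    votes = defaultdict(Counter)
    for obj, _annotator, label in annotations:
        votes[obj][label] += 1
    # most_common(1) picks the label with the highest vote count
    return {obj: counter.most_common(1)[0][0] for obj, counter in votes.items()}

labels = majority_vote([
    ("img1", "w1", 1), ("img1", "w2", 1), ("img1", "w3", 0),
    ("img2", "w1", 0), ("img2", "w2", 0), ("img2", "w3", 0),
])
# labels == {"img1": 1, "img2": 0}
```

Note that ties are resolved arbitrarily here; an odd number of annotators per object avoids the issue for binary labels.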
25. + Aggregating annotations: Majority voting
- Assume that
  - The annotator quality is independent from the object: P(y_i^j = y_i) = p_j
  - All annotators have the same quality: p_j = p
- The integrated quality of majority voting using I = 2N + 1 annotators is

  q = P(y^MV = y) = sum_{l=0}^{N} C(2N+1, l) * p^(2N+1-l) * (1-p)^l

  (i.e., the probability that at most N of the 2N + 1 votes are wrong)
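The integrated quality q above is a binomial tail probability, which is easy to evaluate numerically (the function name is mine):

```python
# Integrated quality q of majority voting over 2N+1 annotators,
# each independently correct with probability p (formula from the slide).
from math import comb

def integrated_quality(p, N):
    """P(y_MV = y): probability that at most N of the 2N+1 votes are wrong."""
    return sum(comb(2 * N + 1, l) * p ** (2 * N + 1 - l) * (1 - p) ** l
               for l in range(N + 1))

# With 5 labelers (N = 2) of individual quality 0.7, the majority label
# is correct more often than any single annotator:
q = integrated_quality(0.7, 2)  # ~0.837
```

This matches the behavior in the figure on the next slide: for p > 0.5 adding labelers improves the integrated quality, while for p = 0.5 it stays at 0.5.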
26. + Aggregating annotations: Majority voting
(Figure 2: the relationship between integrated labeling quality, individual quality, and the number of labelers; curves for p = 0.4, 0.5, ..., 1.0, with the number of labelers ranging from 1 to 13.)
27. + Aggregating annotations [Snow et al., 2008]
- Binary labels: y_i^j in {0, 1}
- The true label is estimated by evaluating the posterior log-odds, i.e.,

  log [ P(y_i = 1 | y_i^1, ..., y_i^J) / P(y_i = 0 | y_i^1, ..., y_i^J) ]

- Applying Bayes' theorem:

  log [ P(y_i = 1 | y_i^1, ..., y_i^J) / P(y_i = 0 | y_i^1, ..., y_i^J) ]     (posterior)
    = sum_j log [ P(y_i^j | y_i = 1) / P(y_i^j | y_i = 0) ]                    (likelihood)
      + log [ P(y_i = 1) / P(y_i = 0) ]                                        (prior)
28. + Aggregating annotations [Snow et al., 2008]
- How to estimate P(y_i^j | y_i = 1) and P(y_i^j | y_i = 0)?
- Gold standard:
  - Some objects have known labels
  - Ask to annotate these objects
  - Compute the empirical p.m.f. for object(s) with known labels:

    P(y^j = 1 | y = 1) = Number of correct annotations / Number of annotations of objects with label = 1

  - Compute the performance of annotator j (independent from the object):

    P(y_1^j | y_1 = 1) = P(y_2^j | y_2 = 1) = ... = P(y_I^j | y_I = 1) = P(y^j | y = 1)
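A minimal sketch of this gold-standard estimation step, assuming each annotator's answers on the gold objects are available as a dict (data layout and names are illustrative):

```python
# Estimate annotator j's class-conditional accuracies from gold-labeled
# objects: P(y^j = 1 | y = 1) and P(y^j = 0 | y = 0), as empirical frequencies.

def estimate_annotator(responses, gold):
    """responses: dict object -> binary label given by annotator j.
    gold: dict object -> known true label.
    Returns (P(y^j=1 | y=1), P(y^j=0 | y=0))."""
    hits = {0: 0, 1: 0}
    totals = {0: 0, 1: 0}
    for obj, y in gold.items():
        if obj in responses:          # annotator may have skipped some objects
            totals[y] += 1
            if responses[obj] == y:
                hits[y] += 1
    return hits[1] / totals[1], hits[0] / totals[0]

sens, spec = estimate_annotator(
    {"a": 1, "b": 1, "c": 0, "d": 0},   # annotator j's answers on gold objects
    {"a": 1, "b": 0, "c": 0, "d": 1},   # known true labels
)
# sens == 0.5, spec == 0.5: this annotator is no better than chance
```

By the independence assumption on the slide, these two numbers fully characterize the annotator and can be reused for every object.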
29. + Aggregating annotations [Snow et al., 2008]
- Each annotator vote is weighted by the log-likelihood ratio for their given response (Naïve Bayes)
- More reliable annotators are weighted more

  log [ P(y_i = 1 | y_i^1, ..., y_i^J) / P(y_i = 0 | y_i^1, ..., y_i^J) ]
    = sum_j log [ P(y_i^j | y_i = 1) / P(y_i^j | y_i = 0) ] + log [ P(y_i = 1) / P(y_i = 0) ]

- Issue: obtaining a gold standard is costly!
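Putting the two previous slides together, a sketch of the Naïve Bayes aggregation: each vote contributes its log-likelihood ratio, computed from the annotator's class-conditional accuracies (sens = P(y^j=1 | y=1), spec = P(y^j=0 | y=0)), which in practice would come from the gold standard. All names are illustrative:

```python
# Naive Bayes aggregation of binary votes: posterior log-odds of the true label.
from math import log

def nb_log_odds(votes, quality, prior1=0.5):
    """votes: dict annotator -> binary label for one object.
    quality: dict annotator -> (sens, spec).
    Returns log P(y=1 | votes) / P(y=0 | votes)."""
    odds = log(prior1 / (1 - prior1))
    for j, v in votes.items():
        sens, spec = quality[j]
        if v == 1:
            odds += log(sens / (1 - spec))   # P(y^j=1|y=1) / P(y^j=1|y=0)
        else:
            odds += log((1 - sens) / spec)   # P(y^j=0|y=1) / P(y^j=0|y=0)
    return odds

quality = {"good": (0.9, 0.9), "poor": (0.55, 0.55)}
# A reliable annotator voting 1 outweighs an unreliable one voting 0:
lo = nb_log_odds({"good": 1, "poor": 0}, quality)
label = 1 if lo > 0 else 0
```

With equal annotator qualities this reduces to majority voting; the weighting only matters once annotators differ in reliability, which is exactly the regime studied on the next slide.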
30. + Aggregating annotations [Kumar and Lease, 2011]
- With very accurate annotators, p_j ~ U(0.6, 1.0), it is better to label more examples once
  (Figure 1: with very accurate annotators, generating multiple labels to improve consensus label accuracy provides little benefit; labeling effort is better spent single labeling more examples.)
- With very noisy annotators, aggregating labels helps, if annotator accuracies are taken into account, p_j ~ U(0.3, 0.7)
  (Figure 2: p ~ U(0.4, 0.6). Single labeling yields such poor training data that there is no benefit from labeling more examples, i.e., a flat learning rate; MV just aggregates this noise to produce more noise. In contrast, by modeling worker accuracies and weighting their labels appropriately, NB can improve consensus labeling accuracy and thereby classifier accuracy.)
  (Figure 3: p ~ U(0.3, 0.7). With greater variance in accuracies vs. Figure 2, NB further improves.)
- SL: Single Labeling; MV: Majority Voting; NB: Naïve Bayes