The document summarizes prior work on poselets and attributes for describing people in images. It discusses how poselets, introduced in 2009, have since been applied to tasks such as segmentation, action recognition, and categorization. It also reviews more than 20 prior works from 1990 to 2011 on discovering and learning attributes from text, images, and motion-capture data, and on tasks such as image retrieval, active learning, and gender recognition. The goal of the current work is to extract attributes from images using a poselet-based approach.
Describing People: A Poselet-based approach to attribute classification
1. Describing People: A Poselet-Based
Approach to Attribute Classification
Lubomir Bourdev1,2
Subhransu Maji1
Jitendra Malik1
1EECS U.C. Berkeley 2Adobe Systems Inc.
7. Prior work on Poselets
• Introduced by [Bourdev and Malik, ICCV09]
• Detection with poselets [Bourdev et al, ECCV10]
• Applications:
  • Segmentation [Brox et al, ECCV10] [Maire et al, ICCV11]
  • Actions [Yang et al, CVPR10] [Maji et al, CVPR11] [Yao et al, ICCV11]
  • Human parsing [Wang et al, CVPR11]
  • Semantic contours [Hariharan et al, ICCV11]
  • Subordinate level categorization [Farrell et al, ICCV11]
12. Prior work on Attributes
• Attributes as intermediate parts
• Discovering attributes from text
• Discovering attributes from images
• Attributes from motion capture
• Joint learning of classes & attributes
• Image retrieval with attributes
• Attributes and actions
• Active learning with attributes
• Attributes of people
• Gender attribute
[Cottrell and Metcalfe, NIPS90] [Golomb et al, NIPS90] [Moghaddam and Yang, PAMI02] [Ferrari and Zisserman, NIPS07] [Kumar et al, ECCV08] [Gallagher and Chen, CVPR08] [Cao et al, ACM08] [Lampert et al, CVPR09] [Farhadi et al, CVPR09] [Wang et al, BMVC09] [Wang and Forsyth, ICCV09] [Kumar et al, ICCV09] [Farhadi et al, CVPR10] [Berg et al, ECCV10] [Wang and Mori, ECCV10] [Sigal et al, ECCV10] [Branson et al, ECCV10] [Hwang et al, CVPR11] [Parikh and Grauman, CVPR11] [Douze et al, CVPR11] [Kovashka et al, ICCV11] [Liu et al, CVPR11] [Qiu et al, ICCV11] [Yao et al, ICCV11] [Dhar et al, CVPR11] [Parikh and Grauman, ICCV11] [Siddiquie et al, CVPR11]
32. Training poselet classifiers
[Figure: a seed patch and the closest patch from each other person, with residual errors 0.15, 0.20, 0.10, 0.85, 0.15, 0.35]
1. Given a seed patch
2. Find the closest patch for every other person
3. Sort them by residual error
4. Threshold them
33. Training poselet classifiers
1. Given a seed patch
2. Find the closest patch for every other person
3. Sort them by residual error
4. Threshold them
5. Use them as positive training examples to train a linear SVM with HOG features (see the sketch below)
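A minimal Python sketch of steps 1–5. Here `closest_patch(person, seed)` is a hypothetical helper standing in for the real matching step, which the poselet pipeline performs using keypoint annotations, and the residual threshold of 0.5 is an illustrative value, not one taken from the deck:

```python
# Minimal sketch of the poselet training loop (steps 1-5 above).
# `closest_patch(person, seed)` is a hypothetical helper that returns a
# (hog_feature_vector, residual_error) pair for that person's best match.
import numpy as np
from sklearn.svm import LinearSVC

def train_poselet(seed, people, negative_feats, residual_threshold=0.5):
    # Steps 2-3: best-matching patch per person, sorted by residual error.
    candidates = sorted((closest_patch(p, seed) for p in people),
                        key=lambda c: c[1])
    # Step 4: keep only patches whose residual falls below the threshold.
    positive_feats = [f for f, r in candidates if r < residual_threshold]
    # Step 5: train a linear SVM on HOG features, positives vs. negatives.
    X = np.vstack(positive_feats + negative_feats)
    y = np.array([1] * len(positive_feats) + [0] * len(negative_feats))
    return LinearSVC().fit(X, y)
```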
47. Our dataset
• Source: VOC 2010 trainval for Person + H3D
• ~8000 annotations (4000 train + 4000 test)
• 9 binary attributes, each labeled by 5 independent annotators via Amazon Mechanical Turk (AMT)
• Ground truth label: assigned if 4 of the 5 annotators agree (see the sketch after this list)
• Dataset will be made publicly available
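As an illustration of the 4-of-5 agreement rule, a minimal sketch; the deck does not spell out how disagreements are handled beyond requiring 4-of-5 agreement, so treating non-consensus examples as unlabeled is an assumption:

```python
# Minimal sketch of the 4-of-5 consensus rule for ground-truth labels.
# `votes` holds the five annotators' binary answers for one attribute.
def consensus_label(votes):
    assert len(votes) == 5
    positives = sum(votes)
    if positives >= 4:
        return 1        # at least 4 of 5 say the attribute is present
    if positives <= 1:  # i.e. at least 4 of 5 say it is absent
        return 0
    return None         # no 4-of-5 agreement: example left unlabeled

print(consensus_label([1, 1, 1, 1, 0]))  # -> 1
print(consensus_label([1, 1, 0, 0, 0]))  # -> None
```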
52. Our baseline
• Canny-modulated HOG with an SPM kernel [Lazebnik et al, CVPR06] (sketched below)
• To help the baseline, we trained a separate SPM for each of four viewpoints: full view, head zoom, upper body, legs
• For each attribute we pick the best-performing SPM as our baseline
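The deck gives no implementation details for the SPM kernel; the following minimal sketch follows the standard formulation of [Lazebnik et al, CVPR06], with the grid layout and level weights taken from that paper rather than from this work:

```python
# Minimal sketch of a spatial-pyramid-match (SPM) kernel between two images.
import numpy as np

def spm_kernel(hists_a, hists_b, L=2):
    """hists_x[l] is the concatenated visual-word histogram over all cells
    at pyramid level l (level 0 = whole image, level l = 2^l x 2^l grid)."""
    k = 0.0
    for l in range(L + 1):
        # Histogram intersection; the elementwise min sums over all
        # matching cells at this level.
        inter = np.minimum(hists_a[l], hists_b[l]).sum()
        # Lazebnik et al weighting: 1/2^L at level 0, 1/2^(L-l+1) above,
        # so finer levels contribute more.
        weight = 1.0 / 2 ** L if l == 0 else 1.0 / 2 ** (L - l + 1)
        k += weight * inter
    return float(k)
```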
53. Precision/recall on our test set
[Figure: per-attribute precision/recall curves comparing label frequency, the SPM baseline, our model without context, and our full model]
54. State-of-the-art Gender Recognition
• We outperform Cognitec (a top-notch face recognizer)
• We outperform any gender recognizer based on frontal faces (are there others?)
• 61% of our test images have frontal faces
• Even with perfect classification of frontal faces, the maximum achievable AP is 80.5%, vs. our AP of 82.4%
55. Confusions
[Figure: men most often confused as women (long hair); women most often confused as men (baseball hat, hair hidden)]
56. Confusions
[Figure: non-T-shirt examples most often confused as T-shirt; short pants most often confused as long pants; failure causes include annotation errors ("Are these pants short?"), detecting the wrong person, and occlusion]
60. How poselets help in high-level vision
• The image is a complex function of the viewpoint, pose, appearance, etc.
• Poselets decouple pose and camera view from appearance
61. Google “poselets” to get:
• The set of published poselet papers
• H3D data set + Matlab tools
• Java3D annotation tool + video tutorial
• Matlab code to detect people using poselets
• Our latest trained poselets
62. Poselets website
http://eecs.berkeley.edu/~lbourdev/poselets
• The set of published poselet papers
• H3D data set + Matlab tools
• Java3D annotation tool + video tutorial
• Matlab code to detect people using poselets
• Our latest trained poselets
[Figure: website screenshot with example attribute descriptions ("A man with long hair ...", "A woman with short hair ...", "A person with glasses, short sleeves ...") and a failure mode: "A computer vision professor who likes machine learning"]