This document discusses different types and sources of data, as well as tools for managing and analyzing data. It covers numerical and textual data drawn from various sources such as existing statistics, experiments, surveys, and the world around us. It also compares human and computer memory, noting that computers can recall contacts and identify themes far faster. The document then outlines spreadsheet, online data visualization, concept mapping, and wiki tools for sorting, filtering, displaying, and communicating data visually in order to identify important information and connections. It concludes by asking the reader to develop a lesson involving data analysis using one of the tools.
Presentation for INTC 3610 Technology for Educators course. Deals with the use of technology tools to help learners organize, analyze and present data, both numerical and text.
Feb. 2016 Demystifying Digital Humanities - Workshop 3 - Paige Morgan
Slides from Demystifying Digital Humanities Workshop 3: Data Wrangling: Programming on the Whiteboard -- taught at the University of Miami Libraries in February, 2016
A conference report on SemTechBiz 2013 in San Francisco, from a data-mining and knowledge-management point of view. It covers several companies and their automatic algorithms for extracting data from cleverly discovered crowd-curated data sources, or their UI tools that leverage existing utility to entice users into helping mark up the data...
AI, Knowledge Representation and Graph Databases - Key Trends in Data Science - Optum
Knowledge Representation is a key focus for most modern AI texts. Many AI experts feel that over half of their work is understanding how to find the right knowledge structures to build intelligent agents that can continuously learn and respond to changing events in their world. In 2012, a paper published by Google started a consolidation of the many diverse forms of knowledge representation into a single general-purpose structure called a labeled property graph.
This talk will describe the key events behind this movement and show how a new generation of data scientists will be needed to build and maintain corporate knowledge graphs that contain uniform, normalized, and highly connected data sets for use by researchers and intelligent agents. We will also discuss the challenges of transferring siloed project knowledge to reusable structures.
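The labeled property graph mentioned above can be sketched in a few lines of Python. This is an illustrative toy under my own assumptions, not Google's structure or any particular graph database's API; the labels, relationship type, and properties are invented for the example.

```python
# A minimal sketch of a labeled property graph: nodes carry a label and
# key-value properties, and edges are typed and can carry properties too.

class Node:
    def __init__(self, label, **props):
        self.label = label
        self.props = props

class Graph:
    def __init__(self):
        self.nodes = []
        self.edges = []  # (source node, relationship type, target node, properties)

    def add_node(self, label, **props):
        node = Node(label, **props)
        self.nodes.append(node)
        return node

    def add_edge(self, source, rel_type, target, **props):
        self.edges.append((source, rel_type, target, props))

    def neighbors(self, node, rel_type):
        """Follow edges of one relationship type out of a node."""
        return [t for s, r, t, _ in self.edges if s is node and r == rel_type]

g = Graph()
ada = g.add_node("Person", name="Ada")
acme = g.add_node("Company", name="Acme")
g.add_edge(ada, "WORKS_AT", acme, since=2020)

print([n.props["name"] for n in g.neighbors(ada, "WORKS_AT")])  # → ['Acme']
```

The point of the structure is that both entities and relationships are first-class and queryable, which is what makes it a candidate general-purpose knowledge representation.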
Most of the time, when you hear about Artificial Intelligence (AI), people talk about new algorithms or even the computation power needed to train them. But Data is one of the most important factors in AI.
Learning data analytics is a rewarding journey that opens doors to diverse career opportunities. With determination, the right resources, and continuous practice, you can learn Data Analytics and become a proficient data analyst.
To learn more about Data Analytics, click on this blog link: https://www.syntaxtechs.com/blog/what-is-data-analytics
Review the steps involved in the research process (identifying the research problem, reviewing the literature, planning/design, collecting, analyzing, storing & sharing data, quality control).
Identify the latest technology tools and apps (mobile, cloud-based, web-based) available for Lecturers and Librarians to utilize at each stage of the research process.
Introduce a range of emerging technology tools to enable researchers to conceptualize, conduct and complete research projects.
The modern library web environment consists of multiple content sources and applications that perform essential functions that often overlap and could potentially create a fractured user experience. For example, content in a library’s Drupal website may be replicated in LibGuides or WordPress blogs. Search functionality in a discovery platform may be replicated in a federated search tool or the ILS OPAC. This presentation provides tips, tackles technical and political challenges to building a single web experience for users, discusses solutions and use of APIs (application programming interfaces), provides concrete examples, and more.
Let's see how some consumer and enterprise technologies are coming together to help organizations with expert location, knowledge management, portable identity, communities of practice, and re-imagining email as a collaboration tool.
A description of the NETS*T and 21st Century Skills framework being used as criteria for grading mini-projects submitted for INTC 3610 Technology for Educators
PHP Frameworks: I want to break free (IPC Berlin 2024) - Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Search and Society: Reimagining Information Access for Radical Futures - Bhaskar Mitra
The field of information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse, explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security as an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply applying machine learning to just any symbolic structure is not sufficient to really reap the benefits of NeSy. Those benefits only materialize when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important on the front end. I have also often seen developers implement front-end features just by following the standard rules of a framework, thinking that is enough to launch the project successfully, and then the project fails. How can this be prevented, and which approach should you choose? I have launched dozens of complex projects, and during the talk we will analyze which approaches have worked for me and which have not.
"Impact of front-end architecture on development cost", Viktor Turskyi
Intc 3610 data 2012
1.
2. Data comes in two basic forms:
Numerical – numbers
Textual – words
Data comes from many sources
Let’s brainstorm a few…
First open a program called Inspiration
Now name some sources
3. Sources include:
Existing statistical sites online
▪ Real-time data sites
Repositories of events like encyclopedias and books
Our own experiments
Surveys and questionnaires
All around us in all kinds of forms
▪ Text
▪ Visuals
4. What is the difference between your memory and a computer when it comes to managing data?
▪ How many e-mail contacts can you recall from your address book?
▪ If you read a long speech, how long does it take you to determine the key themes?
▪ If you take notes from a lecture, how do you determine the most important information?
5. Spreadsheet programs like Microsoft Excel:
Keep track of numerical data
Allow you to sort or filter data
Allow you to create formulas to answer questions
Allow you to display and communicate data visually through charts and graphs
Also offer ways to grab real-time data (search Google for “real-time data sites”)
Online sites such as Many Eyes, Google Earth, or Google Public Data let you visually represent numerical data in powerful ways
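The spreadsheet operations listed above (sorting, filtering, and cell formulas) have direct analogues in code. A minimal Python sketch, with invented sample data:

```python
# Sample rows, like a small spreadsheet of student scores (invented data).
scores = [
    {"student": "Ana", "score": 88},
    {"student": "Ben", "score": 72},
    {"student": "Cam", "score": 95},
    {"student": "Dee", "score": 64},
]

# Sort: like clicking a column header to sort descending.
ranked = sorted(scores, key=lambda row: row["score"], reverse=True)

# Filter: like a spreadsheet filter showing only scores of 80 or above.
passing = [row for row in scores if row["score"] >= 80]

# Formula: like =AVERAGE(B2:B5) in a cell.
average = sum(row["score"] for row in scores) / len(scores)

print(ranked[0]["student"])  # → Cam
print(len(passing))          # → 2
print(average)               # → 79.75
```

The same three moves (sort, filter, summarize) are what learners practice in Excel; the code simply makes each step explicit.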
6. Inspiration and Kidspiration let you:
Link text
Draw out connections
Visually show the important data
Add images to enhance understanding
Many Eyes and Wordle are online tools that help express text data visually for analysis
Wikis are also a way to capture text data in an encyclopedic form
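Under the hood, tools like Wordle size words by how often they appear in a text. A minimal Python sketch of that frequency count, using an invented sample sentence:

```python
# Count word frequencies in a text, the way a word-cloud tool ranks themes.
from collections import Counter
import re

text = "Data is everywhere. Analyzing data helps us find patterns in data."

# Lowercase and pull out the words, ignoring punctuation.
words = re.findall(r"[a-z']+", text.lower())
counts = Counter(words)

# The most frequent words would be drawn largest in a word cloud.
for word, n in counts.most_common(3):
    print(word, n)
```

This is the kind of task from the earlier memory question: a computer finds the key themes of a long speech in milliseconds, while a human reader needs to skim the whole text.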
7. Develop a lesson/activity relevant to your discipline, topic, and grade level that involves data analysis using one of the tools.