The document summarizes Noah Goodman's talk on using mathematical principles to understand thought. It discusses how thought is productive through compositional representations and probabilistic inference. Thought combines basic elements into infinitely many combinations, much as words combine into sentences. Thinking involves probabilistic inference over mental representations to explain observations and plan actions. The talk suggests thought may operate via a probabilistic language of thought based on a probabilistic lambda calculus.
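The central claim can be illustrated with a toy generative program: a compositional model is written as ordinary code, and "thinking" amounts to conditioning that program on an observation. The sketch below is a minimal illustration in plain Python using rejection sampling, not Goodman's Church or probabilistic lambda calculus, and the trick-coin scenario is an invented example.

```python
import random

def flip(p=0.5):
    """An elementary random primitive, composable like any other function."""
    return random.random() < p

def generative_model():
    """A compositional 'mental model': a hidden choice (fair or trick coin)
    generates three observable flips."""
    is_trick = flip(0.1)                  # prior belief: trick coins are rare
    weight = 0.9 if is_trick else 0.5
    flips = [flip(weight) for _ in range(3)]
    return is_trick, flips

def infer(observed, samples=100_000):
    """Probabilistic inference by rejection sampling: run the model many times
    and keep only the runs whose simulated data match the observation."""
    kept = [is_trick
            for is_trick, flips in (generative_model() for _ in range(samples))
            if flips == observed]
    return sum(kept) / len(kept)

# Observing three heads in a row raises the posterior probability of a trick
# coin well above its 0.1 prior (to roughly 0.39).
print(infer([True, True, True]))
```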
URSW 2009 - Probabilistic Ontology and Knowledge Fusion for Procurement Fraud... - Rommel Carvalho
Presentation given by Rommel Carvalho at the 5th Uncertainty Reasoning for the Semantic Web Workshop at the 8th International Semantic Web Conference in 2009.
Paper: Probabilistic Ontology and Knowledge Fusion for Procurement Fraud Detection in Brazil
Abstract: To cope with society’s demand for transparency and corruption prevention, the Brazilian Federal General Comptroller (CGU) has carried out a number of actions, including: awareness campaigns aimed at the private sector; campaigns to educate the public; research initiatives; and regular inspections and audits of municipalities and states. Although CGU has collected information from hundreds of different sources - Revenue Agency, Federal Police, and others - the process of fusing all this data has not been efficient enough to meet the needs of CGU’s decision makers. Therefore, it is natural to change the focus from data fusion to knowledge fusion. As a consequence, traditional syntactic methods must be augmented with techniques that represent and reason with the semantics of databases. However, commonly used approaches fail to deal with uncertainty, a dominant characteristic in corruption prevention. This paper presents the use of Probabilistic OWL (PR-OWL) to design and test a model that performs information fusion to detect possible frauds in procurements involving Federal money. To design this model, a recently developed tool for creating PR-OWL ontologies was used with support from PR-OWL specialists and careful guidance from a fraud detection specialist from CGU.
UnBBayes is a probabilistic network framework written in Java. It has both a GUI and an API with inference, sampling, learning and evaluation. It supports BN, ID, MSBN, OOBN, HBN, MEBN/PR-OWL, structure, parameter and incremental learning.
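For readers unfamiliar with what such a framework computes, the sketch below shows the kind of query a Bayesian network engine like UnBBayes answers, reduced to a two-node network with invented conditional probability tables. It is plain Python for illustration only and does not use the UnBBayes API, which is a Java framework.

```python
# A minimal two-node Bayesian network, Rain -> WetGrass, with invented CPTs.
P_rain = {True: 0.2, False: 0.8}
P_wet_given_rain = {True: {True: 0.9, False: 0.1},
                    False: {True: 0.2, False: 0.8}}

def joint(rain, wet):
    """P(Rain = rain, WetGrass = wet) from the chain rule."""
    return P_rain[rain] * P_wet_given_rain[rain][wet]

def posterior_rain(wet_observed=True):
    """P(Rain | WetGrass = wet_observed), by enumerating the full joint."""
    numerator = joint(True, wet_observed)
    evidence = sum(joint(r, wet_observed) for r in (True, False))
    return numerator / evidence

print(posterior_rain(True))   # ~0.53: observing wet grass raises belief in rain
```

A full framework automates the same calculation for networks with many variables, where junction-tree or sampling algorithms replace brute-force enumeration.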
The overview is presented through a potpourri of slides from different presentations that the Artificial Intelligence Group (GIA) at the University of Brasília (UnB) has given since 1999. It covers BN, ID, MSBN, UnBBayes Server, and MEBN.
This presentation was given by Rommel Carvalho when he started his PhD at George Mason University, at the Friday seminar called Krypton (http://krypton.c4i.gmu.edu/).
Probabilistic Ontology: Representation and Modeling Methodology - Rommel Carvalho
Oral Defense of Doctoral Dissertation
Volgenau School of Engineering, George Mason University
Rommel Novaes Carvalho
Bachelor of Science, University of Brasília, Brazil, 2003
Master of Science, University of Brasília, Brazil, 2008
Probabilistic Ontology: Representation and Modeling Methodology
Tuesday, June 28, 2011, 2:00pm -- 4:00pm
Nguyen Engineering Building, Room 4705
Committee
Kathryn Laskey, Chair
Paulo Costa
Kuo-Chu Chang
David Schum
Larry Kerschberg
Fabio Cozman
Abstract
The past few years have witnessed an increasingly mature body of research on the Semantic Web (SW), with new standards being developed and more complex problems being addressed. As complexity increases in SW applications, so does the need for principled means to cope with uncertainty. Several approaches addressing uncertainty representation and reasoning in the SW have emerged. Among these is Probabilistic Web Ontology Language (PR-OWL), which provides Web Ontology Language (OWL) constructs for representing Multi-Entity Bayesian Network (MEBN) theories. However, there are several important ways in which the initial version, PR-OWL 1.0, fails to achieve full compatibility with OWL. Furthermore, although there is an emerging literature on ontology engineering, little guidance is available on the construction of probabilistic ontologies.
This research proposes a new syntax and semantics, defined as PR-OWL 2.0, which improves compatibility between PR-OWL and OWL in two important respects. First, PR-OWL 2.0 follows the approach suggested by Poole et al. for formalizing the association of random variables from probabilistic theories with the individuals, classes, and properties of ontological languages such as OWL. Second, PR-OWL 2.0 allows values of random variables to range over OWL datatypes.
To address the lack of support for probabilistic ontology engineering, this research describes a new methodology for modeling probabilistic ontologies called Uncertainty Modeling Process for Semantic Technologies (UMP-ST). To better explain the methodology and to verify that it can be applied to different scenarios, this dissertation presents step-by-step constructions of two different probabilistic ontologies. One is used for identifying frauds in public procurements in Brazil and the other is used for identifying terrorist threats in the maritime domain. Both use cases demonstrate the advantages of PR-OWL 2.0 over its predecessor.
Probabilistic Abductive Logic Programming using Possible Worlds - Fulvio Rotella
Reasoning in very complex contexts often requires purely deductive reasoning to be supported by a variety of techniques that can cope with incomplete data. Abductive inference allows one to guess information that has not been explicitly observed. Since there are many explanations for such guesses, a probability must be assigned to each one. This work exploits logical abduction to produce multiple explanations consistent with a given background knowledge and defines a strategy to prioritize them by their chance of being true. Another novelty is the introduction of probabilistic integrity constraints rather than hard ones. We then propose a strategy that learns the model and its parameters from data and exploits our Probabilistic Abductive Proof Procedure to classify never-seen instances. This approach has been tested on some standard datasets, showing that it improves accuracy in the presence of corrupted and missing data.
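The general idea of ranking abductive explanations by probability can be sketched in a few lines. The rules, priors, and observation below are invented, and the code is not the paper's Probabilistic Abductive Proof Procedure; it only illustrates enumerating hypotheses consistent with the background knowledge and prioritizing them by their chance of being true.

```python
# Background knowledge: each abducible hypothesis entails a set of observable facts.
explains = {"rain":      {"wet_grass"},
            "sprinkler": {"wet_grass"},
            "flood":     {"wet_grass", "road_closed"},
            "drought":   {"dry_soil"}}

# Prior probability of each abducible: a soft, probabilistic constraint
# rather than a hard integrity constraint.
prior = {"rain": 0.30, "sprinkler": 0.25, "flood": 0.01, "drought": 0.10}

def abduce(observation):
    """Return the hypotheses that explain the observation, ranked by their
    (normalized) chance of being true."""
    candidates = [h for h, facts in explains.items() if observation in facts]
    total = sum(prior[h] for h in candidates)
    return sorted(((h, prior[h] / total) for h in candidates),
                  key=lambda pair: pair[1], reverse=True)

for hypothesis, prob in abduce("wet_grass"):
    print(f"{hypothesis}: {prob:.2f}")
```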
Presentation - Intelligence Enhancer and Genius 3.0 (智能增长以及天才3.0) - Hang Wu
Traditional geniuses usually suffer from either genetic abnormalities or unusual socio-political behavior; these are what we call Genius 1.0 and Genius 2.0. A new kind of genius, one who uses technology to boost intelligence while maintaining humanity, is proposed as the next stage of human evolution.
Even when traditional geniuses had great genetic advantages, socio-political factors kept them from being accepted by the world. I therefore devised the concept of Genius 3.0 to describe people who use modern neural engineering to enhance their brains.
Alexandra Basford, InCoB 2011: A Journal’s Perspective on Data Standards and ... - GigaScience, BGI Hong Kong
Alexandra Basford's talk in the curation session at the InCoB meeting in Kuala Lumpur, 30/11/11, on: GigaScience: A Journal’s Perspective on Data Standards and Biocuration
The Sixth Sense is a recent technology: a wearable gestural interface that augments the physical world around us with digital information.
The Noetic perspective (from Greek noetikos, mental; nous, mind) identifies the [human] mind as the nexus of the future evolution of humanity. At present, human evolution is a mental process rather than a biological or technological one.
The Noetic model describes mind as a relation-generating complex system arising as a product of biological evolution and manifesting certain defining characteristics such as systemic closure, self-reference, plasticity, etc. This model aims to integrate a systemic view with the mental constructs of the subjective plane. According to the Noetic model, human identity is a dynamic constructive process that brings forth the human observer as the subject of its perceptive and mental states. This process is identified as mind. Images and narratives are the elements encompassing the experiential and mental aspects of the identity process as they appear to the human observer.
The idea of mind as the theater of evolutionary processes is explored further: mind as a complex system can essentially be dissociated from the historical conditions of its emergence, and is therefore virtually unbounded in its evolutionary potential. This has deep implications for the understanding of human nature and the human condition. Finally, the ideas of openness and freedom beyond utility are proposed as futuristic directives of consciously guided evolution of mind.
Engineering Ambient Intelligence Systems using Agent Technology - Nikolaos Spanoudakis
This presentation was given at the nectar session of the 9th Hellenic Conference on Artificial Intelligence (SETN 2016), which took place on May 18th-20th in Thessaloniki.
It is about applying an agent-oriented software engineering (AOSE) methodology, namely the Agent Systems Engineering Methodology (ASEME), to building intelligent systems. We present it along with a case study in the Ambient Intelligence (AmI) application domain. We discuss the challenges, the ASEME methodology, the system architecture, and our results.
Using model-based statistical inference to learn about evolution - Erick Matsen
These are the slides I used for my promotion talk to associate member at the Fred Hutch. My abstract follows:
Our knowledge about much of biology is indirect: rather than directly observing a process, we observe some noisy result of that process. In addition, we almost never have a complete description mapping underlying processes to observations. Given these challenges, what framework can we use to understand biology?
In this talk I will describe the use of probabilistic models to learn about evolution from biological data. Starting with the more familiar terrain of solving equations and performing integration in math, I will describe how these same concepts are generalized to the probabilistic setting. I will illustrate how this works in practice with examples from our current research on reconstruction of evolutionary trees and maturation of antibody-making B cells.
Image generation. Gaussian models for human faces, limits and relations with linear neural networks. Generative adversarial networks (GANs): generators, discriminators, adversarial loss, and two-player games. Convolutional GAN and image arithmetic. Super-resolution. Nearest-neighbor, bilinear and bicubic interpolation. Image sharpening. Linear inverse problems, Tikhonov and Total-Variation regularization. Super-Resolution CNN, VDSR, Fast SRCNN, SRGAN; perceptual, adversarial and content losses. Style transfer: Gatys model, content loss and style loss.
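Of the topics listed, the step from an unregularized linear inverse problem to a Tikhonov-regularized one fits in a few lines of code. The sketch below uses an invented 1-D blur operator and synthetic signal; it illustrates only the regularization idea, not any of the named networks (SRCNN, VDSR, SRGAN).

```python
import numpy as np

# Tikhonov-regularized linear inverse problem: recover x from y = A x + noise
# by minimizing ||A x - y||^2 + lam * ||x||^2, whose closed form is
# x = (A^T A + lam I)^{-1} A^T y.
rng = np.random.default_rng(0)

n = 100
x_true = np.zeros(n)
x_true[40:60] = 1.0                              # simple box signal

# A: 1-D blur (moving average) written as an explicit matrix.
width = 7
A = np.zeros((n, n))
for i in range(n):
    lo, hi = max(0, i - width // 2), min(n, i + width // 2 + 1)
    A[i, lo:hi] = 1.0 / (hi - lo)

y = A @ x_true + 0.01 * rng.standard_normal(n)   # blurred, noisy observation

def tikhonov(A, y, lam):
    """Closed-form Tikhonov (ridge) solution."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

x_naive = np.linalg.lstsq(A, y, rcond=None)[0]   # unregularized, typically unstable here
x_reg = tikhonov(A, y, lam=1e-2)                 # regularized, stable estimate

print("error without regularization:", np.linalg.norm(x_naive - x_true))
print("error with Tikhonov:         ", np.linalg.norm(x_reg - x_true))
```

Total-Variation regularization replaces the ||x||^2 penalty with the l1 norm of the signal's gradient, which favors piecewise-constant reconstructions but no longer has a closed-form solution.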
https://telecombcn-dl.github.io/2018-dlai/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles of deep learning from both algorithmic and computational perspectives.
How WE create I - Heather Schlegel - H+ Summit @ Harvard - Humanity Plus
Heather Schlegel
VP of Product Management, Debtmarket
How WE create I:
Post-Human Identity, Privacy and Self-Value
Science and technology let you create the person you want to be. How does the technology we create today enable future selves? What is the impact on identity creation, individual privacy and self-value?
Heather Schlegel is a futurist, technologist, and cacophonist. For more than 12 years she has helped build innovative Internet products in Silicon Valley and has more than 50 product launches to her name. Schlegel is currently the head of product development at DebtMarket, a financial start-up in Los Angeles. Her research projects include disruptive technology in financial markets: lending, alternate/virtual currencies and transactions; long-term product adoption for innovative technologies and positive wildcards. Schlegel is primarily known by her online moniker, heathervescent, where she explores the intersection of technology, culture and identity.
Superconducting Quantum Circuits That Learn - Geordie Rose - H+ Summit @ Harvard - Humanity Plus
Geordie Rose
D-Wave Systems Inc.
Special purpose superconducting quantum processors for disruptively accelerating machine learning
Any system that could be considered intelligent must be able to learn. Unfortunately teaching machines how to learn in a generalizable way – so-called minimally supervised or unsupervised learning – is an extremely hard problem. While much progress has been made in understanding how we might do this – for example using deep belief networks – all current proposals are extremely computationally intensive. Exercising them in real-world situations is often not possible because of the required computational cost – even for large corporations with access to enormous server farms. Here I present a path to overcoming this problem by running state of the art machine learning algorithms on a revolutionary new processor design, which uses quantum effects to enable a class of algorithms that cannot be run on any conventional processor.
Dr. Geordie Rose is the founder and CTO of D-Wave. He is known as a leading advocate for quantum computing and superconducting processors, and has been invited to speak on these topics in a wide range of venues, including TED, Future in Review and SC.
His innovative and ambitious approach to building quantum computing technology and support infrastructure has received coverage in MIT Technology Review magazine, The Economist, New Scientist, Scientific American and Science magazines, and one of his business strategies was profiled in a Harvard Business School case study.
Dr. Rose holds a Ph.D. in theoretical physics from the University of British Columbia, specializing in quantum effects in materials. While at McMaster University, he graduated first in his class with a B.Eng. in Engineering Physics, specializing in semiconductor engineering.
The Power of Hierarchical Thinking - Ray Kurzweil - H+ Summit @ Harvard - Humanity Plus
Ray Kurzweil
The Power of Hierarchical Thinking
What does it mean to understand the brain? Where are we on the roadmap to this goal? What are the effective routes to progress - detailed modeling, theoretical effort, improvement of imaging and computational technologies? What predictions can we make? What are the consequences of materialization of such predictions - social, ethical? Kurzweil will address these questions and examine some of the most common criticisms of the exponential growth of information technology including criticisms from hardware ("Moore's Law will not go on forever"), software ("software is stuck in the mud"), the brain ("the brain is too complicated to understand or replicate"), ontology ("software is not capable of thinking or of consciousness"), and promise versus peril ("biotechnology, nanotechnology, and artificial intelligence are too dangerous").
There is now a grand project comprising at least a hundred thousand scientists and engineers working in diverse ways to understand the best example we have of an intelligent process: the human brain. It is arguably the most important project in the history of the human-machine civilization. The goal of the project is to understand precisely how the human brain works, and then to use these revealed algorithms as a basis for creating even more intelligent machines.
As we learn the algorithms underlying human intelligence, we will similarly be able to engineer it to vastly extend the powers of our intelligence. Indeed this process is already well under way. There are literally hundreds of tasks and activities that used to be the sole province of human intelligence that can now be conducted by computers usually with greater precision and vastly greater scale.
Was it inevitable that a species would evolve that is capable of creating its own evolutionary process in the form of intelligent technology? Kurzweil will argue that it was.
According to my models we are only two decades from fully modeling and simulating the human brain. By the time we finish this reverse-engineering project, we will have computers that are millions of times more powerful than the human brain. These computers will be further amplified by being networked into a vast world wide cloud of computing. The algorithms of intelligence will begin to self-iterate towards ever smarter algorithms.
This is how we will address the grand challenges of humanity such as maintaining a healthy environment, providing for the resources for a growing population including energy, food, and water, overcoming disease, vastly extending human longevity, and overcoming poverty. It is only by extending our intelligence with our intelligent technology that we can handle the scale of complexity to address these challenges.
Ray Kurzweil has been described as "the restless genius" by the Wall Street Journal, and "the ultimate thinking machine" by Forbes. Inc. magazine ranked him #8 among entrepreneurs in the United States, calling him the "rightful heir to Thomas Edison", and PBS included Ray as one of 16 "revolutionaries who made America", along with other inventors of the past two centuries.
As one of the leading inventors of our time, Ray was the principal developer of the first CCD flat-bed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition. Ray's web site Kurzweil AI.net has over one million readers.
Among Ray's many honors, he is the recipient of the $500,000 MIT-Lemelson Prize, the world's largest for innovation. In 1999, he received the National Medal of Technology, the nation's highest honor in technology, from President Clinton in a White House ceremony. And in 2002, he was inducted into the National Inventors Hall of Fame.
The Rise of Citizen-Scientists in the Eversmarter World - Alex Lightman - H+ ... - Humanity Plus
Alex Lightman
Executive Director, Humanity+
The Rise of Citizen-Scientists in the Eversmarter World
Knowledge may be expanding exponentially, but the current rate of civilizational learning and institutional upgrading is still far too slow in the century of peak oil, peak uranium, and "peak everything". Humanity needs to gather vastly more data as part of ever larger and more widespread scientific experiments, and make science and technology flourish in streets, fields, and homes as well as in university and corporate laboratories. In this talk, H+ Executive Director Alex Lightman will give an introduction and overview of the big picture of H+ the organization, the magazine, and the conference, and how the participants can make the most of their experience and relationships at the conference. The case for ending embargoes and other beaver dams in the rivers of potentially global knowledge will be made. Lightman will offer a vision of a properly functioning Eversmarter world, ending with a call to action to become a citizen-scientist, and a recruiter of other citizen-scientists.
Alex Lightman is the Executive Director of Humanity+ and the chair of the H+ Summit @ Harvard and of the inaugural H+ Summit held December 2009 in Irvine, California. He is a director of Fortune Nest Corporation (Bahrain, Beijing and Beverly Hills, CA) and of Inova Technology. He is an award-winning educator, an inventor with several US patents issued or pending and the author of over 800,000 words, including 12 articles in h+ magazine, and Brave New Unwired World: The Digital Big Bang and The Infinite Internet, the first book on 4G wireless. He has advised NATO, the US Dept. of Defense, and a number of governments on Internet Protocol version 6, the 128-bit successor to the current Internet, IPv4. Lightman's advocacy led to the only Congressional hearings held on US Internet Leadership, conducted by The Government Reform Committee and at which Lightman testified, leading to implementation of Lightman's recommendations to mandate IPv6 for the US government and require IPv6 as part of government information technology contracts. Lightman studied Civil and Environmental Engineering, and graduated from the Massachusetts Institute of Technology in 1983 (Course I-A), and attended graduate school at Harvard's Kennedy School of Government. He lives in Santa Monica, California, where he runs marathons, and attempts his first Ironman triathlon, in the UK, on August 1, 2010.
50 years of Invention and Entrepreneurship - Nolan Bushnell - H+ Summit @ Har... - Humanity Plus
The products and services of my life from simple gadgets as a teen to the more sophisticated projects of an adult continue to stoke the creative fires of invention and discovery. My processes of research and execution on a project have been a significant part of successful business formation. In any business many unforeseen occurrences can disrupt any carefully crafted business plan. The objective is to minimize those instances to as few as possible so that it is unlikely to be battling more than one at a time. I will talk about my process of invention and business formation and how it applies to my various companies.
Nolan Bushnell’s career spans over 30 years in which he has made innovations and contributions to several industries. He is best known as the creator of the first digital videogame and the founder of Atari, of the Chuck E. Cheese entertainment restaurant chain, of Axlon for interactive toys, of Catalyst (the first high-tech incubator), of Etak (the first automobile navigation system), of ByVideo (the first online shopping system), and several others. Through the years Bushnell has given over 2000 speeches on subjects ranging from his companies, the history of video games, the process of innovation, entrepreneurship, and intrapreneurship (bringing to market a new project in an old company) to his 10 steps for bringing projects to market with no money. His speeches, while somewhat irreverent toward established clichés, are humorous, high-energy, and always thought-provoking.
He is widely credited with the following innovations and trends:
- The creation of the first commercial digital videogame.
- The acceptance of casual dress in the technical workplace.
- The creation of the Chuck E. Cheese chain of restaurants.
- The creation of the first digital automobile navigation system.
- The creation of the first on line marketing system.
- 3 Simple ways to inject creativity into an organization.
Several of his quotes have entered the mainstream:
- On Ideas: “Anyone who has had a shower has had a good idea--- what separates the winners from the losers is what does the person do after they leave the shower.”
- On Arrogance: “About the time someone thinks the sun shines out their rear-- all that they can be assured of is an illuminated landing area.”
- On Innovation: “Everyone wants innovation until they see it.”
- On Hard Work: “If it were easy to make a million dollars more people would be doing it.”
- The Future: “The world rewards accurate prediction of the future, the best way to be right in your predictions are to make them happen”
- Business Plans: “Anyone can create a success based on everything going correctly—the issue is to be successful even if nothing goes according to plan.”
He has received numerous awards including the following:
- Newsweek's 50 Americans that changed the nation.
- Consumer Electronics “Hall of Fame”
- Video “Hall of Fame”
- Restaurant Business “Innovator of the Year”
- Amusement Operators of America “Lifetime Achievement”
- Distinguished Fellow, University of Utah
- Computer Museum “Hall of Fame”
- Distinguished Leader of Silicon Valley
- The Agenda “Crystal Ball Award”
- Babson College “Distinguished Entrepreneur”
- British Academy of Film and Television Arts (BAFTA) Lifetime Achievement Award
The Evolving Data Sphere - David Orban - H+ Summit @ Harvard - Humanity Plus
David Orban
Chairman, Humanity+
Advisor, Singularity University
Founder & Chief Evangelist, WideTag, Inc.
Intelligence Augmentation, Decision Power, And The Emerging Data Sphere
Human civilization depends on our ability to manage its increasing complexity. Behaviors, processes, and decisions that in the past were tolerated by the complex adaptive system we call Earth, are now more and more showing unforeseen consequences in unexpected places.
Many of our theories about the workings of the world are hampered in their predictive power by the lack of data, and suffer garbage-in, garbage-out effects. New interconnected sensor networks, fast and ubiquitous communications, and the parallel power of our massive software systems are a none-too-soon answer to this need, and promise to revolutionize the way we understand, and act upon, the planet.
The data sphere we are building, developing through every traceable action of millions of people, and billions, soon trillions of devices, designs a fine-grained picture of necessary understanding, and empowers us to believe that we can indeed aim to evolve our civilization, and to move it to the next levels of complexity, and achievement of human potential.
David Orban is an entrepreneur and visionary. He is Chairman of Humanity+, Advisor of the Singularity University, a Founder of WideTag, Inc., a high technology start-up company providing the infrastructure for an open Internet of Things. David shapes the strategic vision of its technologies by developing the policies and communication steps necessary to enable constructive progress. He is further a Scientific Advisory Board Member for the Lifeboat Foundation. David cuts across the limits of deep specialization to contribute to the new renaissance. He explains, “My vision is at the crossroads of technology and society as defined by their co-evolution.” David Orban’s personal motto is, “What is the question I should be asking?” This concept is his vehicle to accelerating cycles of invention and innovation in order to build the new world ahead.
Humanity 2020: The Next 10 Years of Human Development - Ramez Naam - H+ Summi... - Humanity Plus
Ramez Naam
Humanity 2020: The Next 10 Years of Human Development
The decade between 2010 and 2020 will be a small but significant step in the development of human enhancement technology, with tremendous numbers of new discoveries in genetics fueled by the continuing exponential drop in gene sequencing cost, commercial availability of a new generation of cognitive enhancers and the first plausible aging inhibitors, likely advances in genetic reprogramming of embryos and of mature humans, and continued progress in prosthetics, imaging, sports enhancement, and numerous other areas. Science and technology will have made significant strides in empowering individuals to be smarter, stronger, faster, and longer lived than ever before.
Computer scientist Ramez Naam, author of More Than Human: Embracing the Promise of Human Enhancement, and winner of the 2005 H.G. Wells Award for Contributions to Transhumanism will give a guided tour of the 10 year horizons across the board of human enhancement.
Bryan Bishop
Do-it-yourself Transhuman Tech
This talk will cover the prospects of do-it-yourself transhumanism, do-it-yourself garage biotech and engineering. These topics and more will be explored within the context of open source technology and licensing. In addition, progress on open source DIY lab-on-a-chip devices will be exhibited.
Bryan Bishop is an advocate and developer of do-it-yourself transhumanism and open source hardware. His primary focus is directing a "triple trick" transhumanist team focusing on accelerating trends like the technological singularity, do-it-yourself biology, and open source technologies. In 2008, Bishop became a research assistant at the Automated Design Lab at the University of Texas at Austin. From time to time, if you're lucky, you might find him stealing a few hours of sleep on the lab couch. Lately he spends his waking hours at the recently opened hackerspace in Austin, Texas.
You can find him on the web at http://heybryan.org
Transhumanism & Education - Kevin Jain - H+ Summit @ Harvard - Humanity Plus
Kevin Jain
Transhumanism & Education
In reviewing the curricula of various Universities, one will find few, if any, classes that meaningfully consider the increasing assimilation of technology with the human. If an education predicated on assumptions of human nature is made without a meaningful consideration of transhumanism, can it remain relevant in a future where technology may render false these very assumptions? How can the question of human enhancement be introduced as a topic of more widespread academic deliberation? This talk will also discuss current efforts in this arena.
Kevin Jain is an undergraduate at Harvard University, and is Founder and President of the Harvard College Future Society, a student organization interested in evaluating the impact of future technologies on the human and humanity. He is the Student H+ Summit Coordinator, and helped organize the H+ 2010 Summit at Harvard. He plans to graduate with a special concentration in Transhumanism.
Can we extract a mind from a plastic-embedded brain? - Kenneth Hayworth - H+ ... - Humanity Plus
Ken Hayworth
Can we extract a mind from a plastic-embedded brain?
We now have a good working theory of consciousness – the phenomenal self model (Metzinger 2009), and we have a good understanding of the human cognitive architecture (Anderson 2007) within which this self model is implemented. The key components of this cognitive architecture are declarative memory chunks and productions – thought to be implemented as stable attractors in the neural networks of the cortex and basal ganglia. According to neural network theory, such stable attractors are robustly defined by the synaptic connectivity between neurons. In small pieces of tissue such synaptic connectivity is easily preserved using chemical fixation and embedding in plastic, and it should be relatively easy to adapt these protocols into a surgical procedure performed in hospitals to preserve whole human brains. Such plastic embedded brain tissue can be imaged at the nanometer level using new automated techniques (SBFSEM, FIBSEM, Tape-to-SEM), and we can directly extrapolate these techniques to future ones that will enable all the synaptic connections within a human brain to be mapped allowing a fully accurate simulation of the original preserved mind. In short, we have a complete sketch of how mind uploading will work and we have a mandate to implement emergency brain preservation in hospitals for all who desire access to this future technology.
Kenneth Hayworth, a postdoctoral fellow at Harvard University, is the inventor of several technologies for high-throughput volume imaging of neural circuits at the nanometer scale. He received a PhD in Neuroscience from the University of Southern California for research into how the human visual system encodes spatial relations among objects. Hayworth is a vocal advocate for brain preservation and mind uploading, and runs a website (www.brainpreservation.org) calling for the implementation of an Emergency Glutaraldehyde Perfusion procedure in hospitals, and for the development of a Whole Brain Plastic Embedding procedure which can demonstrate perfect ultrastructure preservation across an entire human brain.
Computation of Things - Justyna Zander, Pieter Mosterman - H+ Summit @ Harvard - Humanity Plus
Justyna Zander
Harvard University
Fraunhofer Institute FOKUS
Computation of Things:
Challenges and Solutions for the Needs of Humanity
Presented with Pieter J. Mosterman
The current exponential growth of technologies is providing novel, frequently unimaginable ways of leveraging their applications for human needs. Ubiquitous communication capabilities allow a redefinition of an individual person as one who is becoming an integrated part of the virtual world and vice versa. The challenge of sustainable development of those trends, from the perspective of a single human and of humanity as such, remains unsolved.
In the presented vision, various aspects of sustainability (e.g., the relation between a person’s quality of life, climate change, and social awareness) are highlighted to explore and share how an individual impacts and is impacted by their surroundings. Ultimately a technological solution called Computation of Things (CoTh) is outlined. It allows for a quick and reliable assessment of people’s possible decision paths and how these affect sustainable development on a local and global scale. The aim is to forecast life-path alternatives for a person based on their geographic position (including pollution level and energy usage), activity patterns (including nutrition habits, lifestyle, travelling load, family status, circle of friends, social network, or virtual life), and state patterns (including the individual’s DNA, current health conditions, and musculature).
CoTh enables an understanding of the individual self and its surroundings based on micro-scale information that combines with macro-scale data to enable prediction of different life scenarios. It is defined as an abundant supply of predictive computation capabilities of high performance and large-scale applicability, with high accuracy and quality, so as to provide for humanity’s physical, physiological, mental, and spiritual needs in a profound and as yet unfathomed manner.
Its core is closely tied to physical systems engineering. Thus, parallel research on the notion of computation deploys computational models as the primary representations of physics, instead of attempting to approximate first principles ever more closely. A theory of treating models as dynamic systems themselves follows. This holds the promise of a fundamental breakthrough in computational semantics, the significance of which becomes paramount when the faithfulness of the predictions is considered.
Dr. Justyna Zander is a Postdoctoral Research Scientist at Harvard University (Harvard Humanitarian Initiative) in Cambridge, MA, USA (since 2009) and Project Manager at the Fraunhofer Institute for Open Communication Systems in Berlin, Germany (since 2004). She holds Doctorate of Engineering Science (2008) and Master of Science (2005), both in the fields of Computer Science and Electrical Engineering from Technical University Berlin, Germany, Bachelor of Science (2004) in Computer Science, and Bachelor of Science (2003) in Environmental Protection and Management from Gdansk University of Technology, Poland.
She graduated from the Singularity University, Mountain View, CA, USA in 2009 where she then was a Teaching Fellow in 2010. Before she was a visiting scholar at the University of California in San Diego, CA, USA in 2007, and a visiting researcher at The MathWorks in Natick, MA, USA in 2008. Her research interests include heterogeneous system development, design, simulation, computation, humanities, and future studies.
For her scientific efforts Dr. Zander received grants and scholarships from such institutions as Polish Prime Ministry (1999-2000), Polish Ministry of Education and Sport (2001–2004), German Academic Exchange Service (2002), European Union (2003-2004), Hertie Foundation (2004-2005), IFIP TC6 (2005), German National Academic Foundation Grant (2005-2008), IEEE (2006), Siemens (2007), Metodos y Tecnologia (2008), Singularit
Far Beyond Smartphones - David Wood - H+ Summit @ Harvard - Humanity Plus
Far Beyond Smartphones:
Lessons From Disruptive Technology, Open Collaboration, and Breakthrough Mobile Products
David Wood has spent more than 20 years envisioning, architecting, implementing, supporting, and avidly using smart mobile devices (devices that can also be called "personal electronic brains"): ten years with PDA manufacturer Psion PLC, and then ten more with smartphone operating system specialist Symbian Ltd. He was centrally involved in preparations and planning for the open source Symbian Foundation. Over that time, many lessons have emerged, highly relevant to the H+ mission to explore how humanity will be radically changed by technology in the near future:
What factors cause both spurts and slowdowns in technology development? What enables new technology visions to "cross the chasm" towards mainstream adoption? Given the history of improvements in smart mobile devices over the last 20 years, what can we realistically expect in the next 20 years? How credible is the vision of mobile devices helping billions of people to collect data that can be used for science and advance human knowledge? To what extent can technological progress be foreseen, and to what extent is the process chaotic, risky, and even dangerous?
David Wood spent ten years with PDA manufacturer Psion PLC, and then ten more with smartphone operating system specialist Symbian Ltd, where he was co-founder and executive vice president.
His background includes: many years building and integrating UI system software and application frameworks in 16-bit and 32-bit versions of “EPOC” software (later named “Symbian OS”); growing and directing the technical consulting teams that worked with leading phone manufacturers to create the world’s first successful smartphones; and defining and running development programs to stimulate and nurture the fast-growing Symbian partner ecosystem.
From the first half of 2008, he was involved in preparations and planning for the independent open source Symbian Foundation. He served on the Leadership Team of the Symbian Foundation as “Catalyst and Futurist” until October 2009. He continues these same roles from within Delta Wisdom.
He has an MA in Mathematics from Cambridge University and an honorary doctorate in science from the University of Westminster.
In September 2009 he was included in T3's list of "100 most influential people in technology": http://tech100.t3.com/list/80-61/.
Military 2.0 - Patrick Lin - H+ Summit @ Harvard - Humanity Plus
For better or worse, the military is a major driver of technological, world-changing innovations, such as the Internet. At the same time, wars and armed conflicts are a key roadblock in the evolution of humanity. Therefore, to understand how emerging technologies will change our lives, we must look at their military origins as a harbinger of things to come for society at large. This presentation will focus on ethical and policy questions arising from two key areas making headlines today and in the future: human enhancement technologies and robotics.
For instance, are there moral or practical issues with eliminating human emotions such as fear or anger, which have led to abuses and accidents in wartime? Must these enhancements (and others, such as super-strength) be temporary or reversible, considering that soldiers usually return to civilian life? Robots can discourage such abuses if equipped with cameras, becoming objective and unblinking observers on the battlefield, but would this erode cohesion and trust among soldiers – and in the civilian realm, would surveillance robots infringe on our privacy? Generally, would these new technologies make it easier to engage in war, since they would lower political costs by reducing the number of casualties on our side – if so, is it immoral, or otherwise counterproductive to humanity's progress, to develop these capabilities?
Patrick Lin is the director of the Ethics + Emerging Sciences Group, based at California Polytechnic State University, San Luis Obispo. Most recently, he has led research efforts that culminated in two major reports: Autonomous Military Robotics: Risk, Ethics, and Design (funded by the U.S. Dept. of Defense/Navy, 2008) and Ethics of Human Enhancement: 25 Questions & Answers (funded by the U.S. National Science Foundation, 2009). He has published several books and papers in the field of technology ethics, including a new monograph What Is Nanotechnology and Why Does It Matter?: From Science to Ethics (Wiley-Blackwell, 2010) and a forthcoming anthology Robot Ethics: The Social and Ethical Implication of Robotics (MIT Press, in preparation). Dr. Lin earned his B.A. from University of California at Berkeley, M.A. and Ph.D. from University of California at Santa Barbara, and completed a three-year post-doctoral appointment at Dartmouth College. He is currently an assistant professor in Cal Poly’s philosophy department and an ethics fellow at the U.S. Naval Academy.
Sparking Our Neural Humanity - M. A. Greenstein - H+ Summit - Humanity+ - Humanity Plus
M. A. Greenstein is an internationally recognized commentator, researcher, and coach on best and future practices for "opening the doors of perception". Based in L.A. with networked alliances throughout the Asia-Pacific region, she founded The George Greenstein Institute and the future-focused e-zine BODIES IN SPACE to advance global change in designing creative and holistic learning systems, as well as to encourage progressive leadership in related issues of neurotech innovation and designing sustainable lifestyles. Dedicated to BIG THINKING energized by visionary "sci-art" and anchored by S.I.T. (Somatic Intelligence Training), Dr. G is a whole-brain systems generator who privileges "interoception" as a search engine for catapulting & mapping best design images and ideas.
An Adjunct Associate Professor at Art Center College of Design, Dr. G. is also member of TED, Mindshare.la, The Neuroleadership Institute and in alliance with the Society for Neuroscience and the Neurotechnology Industry Organization.
Democratizing The Genome - Melanie Swan - H+ Summit @ Harvard - Humanity Plus
Exponential declines in the cost of human genome sequencing are starting to put applications in the hands of researchers and consumers that were only dreamt of previously. Individuals are starting to have access to their own genomic data, which can be actionable in a variety of ways including drug response, health condition analysis, athletic capability, and ancestry. DIYgenomics is a new platform bringing citizen scientists together to run peer cohort research studies and conduct novel research linking genetic data and physical biomarkers. Some norms are developing in response to the variety of community-based research issues that arise, such as adaptive studies, informed consent, security, anonymity, and study design.
Melanie Swan is a genomics researcher, hedge fund manager, and leader in the Health 2.0 movement. Recent publications include “Multigenic Condition Risk Assessment in Direct-to-Consumer Genomic Services,” “Engineering Life into Technology,” and “Emerging Patient-Driven Health Care Models.” She serves as an advisor to research foundations, government agencies, corporations, and startups. Melanie has an MBA from the Wharton School of the University of Pennsylvania and a BA from Georgetown University. She is an advisor and faculty member at Singularity University.
Altered Carbon - Andrew Hessel - H+ Summit @ Harvard - Humanity Plus
Andrew Hessel is an outspoken advocate and champion of DNA technologies, catalyzing new project developments, investment, and relationships in synthetic biology and bioengineering. His overarching message is that biology is poised to become the IT industry of the 21st century, fueled by a new generation of young researchers and entrepreneurs armed with technologies like DNA sequencing and synthesis that are becoming exponentially more powerful yet increasingly inexpensive. The possible applications are virtually limitless and include the typical global challenges (sustainable fuel production, environmental remediation, and better diagnosis and treatment of human disease) but also extend into new, uncharted scientific territories. His popular lectures at the Singularity University and his visioning work reinforce that the foundations for this new industry are already in place, that it will grow explosively once the first few killer applications find commercial success, and that it will change the world, and humanity itself, in profound yet perhaps evolutionarily necessary ways.
Do We Click? - Laurent Silbert - H+ Summit @ Harvard - Humanity Plus
Do we click?
Speaker-listener neural coupling underlies successful communication
Verbal communication enables us to directly convey information across brains, independent of the actual external state of affairs (e.g. telling a story of past events). Such a phenomenon may be reflected in the ability of the speaker to directly induce similar brain patterns in another individual, via speech, in the absence of any other stimulation. Recording the neural responses of both the speaker's brain and the listener's brain opens a new window into the neural basis of interpersonal communication, and may be used to assess verbal and non-verbal forms of interaction in both human and other model systems. Further understanding of the neural processes that facilitate neural coupling across interlocutors may shed light on the mechanisms by which our brains interact and bind to form societies.
The capacity to communicate internal thoughts from one person to another is at the foundation of human society. Communication naturally requires an interaction between at least two people. Existing neurolinguistic studies are concerned, however, either with speech production or with the comprehension of isolated words or sentences. Little is known, therefore, about the underlying neuronal mechanism that facilitates the transfer of information between two brains during communication.
Understanding the interaction between a speaker’s brain and a listener’s brain in the context of real-world communication requires the development of new experimental paradigms. Using functional Magnetic Resonance Imaging (fMRI), we measured neural signals from two brains (a speaker and a listener) during a complex everyday communication. We then built a simple, interpretable model that leverages the dynamics of fMRI and uses the speaker’s brain responses as a model for predicting the brain responses within the listener. Our model reveals that during successful communication, the speaker's and the listener’s brains exhibit joint, temporally coupled response patterns. Such speaker-listener neural coupling vanishes when participants fail to communicate (for example, when they speak different languages). The temporal nature of this speaker-listener coupling suggests that an ability to evoke similar brain patterns in another individual via speech may gate our communication abilities. Moreover, while in most areas the listeners’ brain responses mirror the speaker’s responses with a delay, some areas in the listeners’ brain exhibit predictive, anticipatory responses. Finally, we found that the extent of the anticipatory neuronal coupling between interlocutors is predictive of communicative success.
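The core of the described analysis, predicting the listener's time course from the speaker's time course at several temporal lags and scoring the coupling, can be sketched with synthetic signals. The code below uses plain numpy regression on invented data; it is not the authors' fMRI pipeline.

```python
import numpy as np

# Sketch: predict a listener's response time course from the speaker's time
# course at several temporal lags, and score coupling by the correlation
# between prediction and data. Synthetic signals stand in for fMRI.
rng = np.random.default_rng(1)

T = 300
speaker = rng.standard_normal(T)
# Invented ground truth: the listener mirrors the speaker with a 2-step delay.
listener = 0.8 * np.roll(speaker, 2) + 0.3 * rng.standard_normal(T)

lags = list(range(-3, 4))              # negative lag = listener anticipates the speaker
X = np.column_stack([np.roll(speaker, lag) for lag in lags])

beta, *_ = np.linalg.lstsq(X, listener, rcond=None)
prediction = X @ beta

coupling = np.corrcoef(prediction, listener)[0, 1]
dominant_lag = lags[int(np.argmax(np.abs(beta)))]
print(f"coupling r = {coupling:.2f}, dominant lag = {dominant_lag} steps")
```

A vanishing coupling score for mismatched speaker-listener pairs (for example, different languages) would correspond to the control condition described above.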
Currently a PhD candidate in Neuroscience at Princeton University, Silbert also holds a Bachelor's degree from the University of Pennsylvania (Biology and Photography), a Master's in Neuroscience from Mt. Sinai School of Medicine, and a Master's in Psychology from NYU.
HACCP as a Lifespan Extension Management System - Morris Johnson - H+Summit @...Humanity Plus
HACCP, a comprehensive, self-directed management system used to ensure food and pharmaceutical safety, can be readily adapted to personalized healthspan and lifespan plan creation and implementation. Preventative, regenerative, enhancement, crisis-management and palliative medicine incorporated into the 12-step HACCP management system life-cycle can manage hazards which contribute to finite lifespan and reduced healthspan by managing 8 critical control domains. Hazard analysis of 6 management domains, including 8 moral-hazard sub-domains, can empower individuals to create their own personal longevity dividend. It is asserted that the profitability of statistically compliant mortality creates an economic incentive which undermines efforts to shift the global economic paradigm to one which encourages extreme healthspan and lifespan. Adaptation of universally accepted HACCP methodologies can drive transhumanism's fundamental principles and the concept of universally accessible extreme lifespan into mainstream acceptance.
Born in Radville, Saskatchewan, on 21 September 1955. "Citizen scientist" and Chief Technology Officer of Lifespan Pharma Inc. Since 1973 he has made a career of farming, trucking, oilfield industry work and entrepreneurial business. 1977-79: director on the board of the Bison-Hybrid International Association. 1978 to the present: active in public policy development with the Saskatchewan NDP; Provincial Election Candidate for the NDP in the Estevan Constituency in the 2007 election. 1982: organized, founded and served as president of the Canadian Bison Association. 2006: acquired a certificate in HACCP system creation and system management from the Food Centre on the University of Saskatchewan campus. 2006: completed the "Longevity Dividend" course directed by James Hughes, IEET. 2005: founded Lifespan Pharma Incorporated, a company which is commercializing a patent and trademarks as well as trade secrets for producing and marketing a new food supplement ingredient, CANTERPENE.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren't traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company's observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to the purview of ops, infra and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex ProofsAlex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don't know what they don't know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients' needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at a smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
4-6. What is thought?
• How are thoughts structured?
• How does this structure support flexible, successful thinking?
What mathematical principles can help us understand thought? [diagram label: engineer]
19-20. Composition and probability
Thought is productive: "the infinite use of finite means" → Compositional representations.
Thought is useful in an uncertain world.
21-23. Composition and probability
Why did he yell at me? He wanted to hurt me. / He thought I was a telemarketer.
(Belief + Desire → Action)
24-26. Composition and probability
Thought is productive: "the infinite use of finite means" → Compositional representations.
Thought is useful in an uncertain world → Probabilistic inference.
27-29. Composition and probability
a + b + c = ?  [histogram over the values 0, 1, 2, 3]
P(H|d) ∝ P(d|H) P(H)
30-32. Composition and probability
∀x King(x) ⇒ Man(x)
∀y Man(y) ⇔ ¬Woman(y)
33. Composition and probability: the probabilistic language of thought hypothesis
Thought is productive ("the infinite use of finite means") → Compositional representations.
Thought is useful in an uncertain world → Probabilistic inference.
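To make the slide's Bayes rule concrete, here is a minimal worked example (not part of the original slides) in Python: three binary causes a, b, c, each true with prior probability 0.3, an observation d = a + b + c, and the posterior P(a=1 | d=1) computed by enumeration, exactly as P(H|d) ∝ P(d|H) P(H) prescribes.

    # Worked example of P(H|d) ∝ P(d|H) P(H): three binary causes, each with
    # prior probability 0.3, and observed data d = a + b + c = 1.
    from itertools import product

    p = 0.3
    def prior(bit):                 # P(a), P(b), P(c) are independent
        return p if bit == 1 else 1 - p

    joint = {}                      # unnormalized posterior over (a, b, c) given d = 1
    for a, b, c in product([0, 1], repeat=3):
        likelihood = 1.0 if a + b + c == 1 else 0.0     # P(d=1 | a, b, c)
        joint[(a, b, c)] = likelihood * prior(a) * prior(b) * prior(c)

    z = sum(joint.values())                             # normalizing constant P(d=1)
    posterior_a = sum(v for (a, _, _), v in joint.items() if a == 1) / z
    print(round(posterior_a, 3))    # ≈ 0.333: each of a, b, c is equally likely to be the one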
41-46. A probabilistic language
Lambda calculus:
  (define double (λ (x) (+ x x)))
  (double 3) => 6
  (define repeat (λ (f) (λ (x) (f (f x)))))
  ((repeat double) 3) => 12
Probabilistic lambda calculus:
  (define a (flip 0.3))
  (define b (flip 0.3))
  (define c (flip 0.3))
  (+ a b c)
Each run draws fresh values for a, b and c, so repeated runs give different results, e.g. => 2, 0, 1, ...; over many runs the results form a probability/frequency histogram over 0, 1, 2, 3.
Goodman, Mansinghka, Roy, Bonawitz, Tenenbaum (2008)
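As a rough illustration of what the probabilistic program above expresses, here is a short Python sketch (not Church, and not the talk's code): flip is approximated with a random number generator, and repeated runs of a + b + c are tallied into the frequency histogram over 0-3 that the slide shows.

    # Python sketch of the probabilistic lambda calculus example above.
    import random
    from collections import Counter

    def flip(p=0.5):
        """Return 1 with probability p, else 0 (analogue of Church's flip)."""
        return 1 if random.random() < p else 0

    def run_once():
        a, b, c = flip(0.3), flip(0.3), flip(0.3)
        return a + b + c

    counts = Counter(run_once() for _ in range(10000))
    for value in range(4):
        print(value, counts[value] / 10000)   # approximates the probability/frequency histogram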
47. Hypothesis
• The probabilistic language of thought hypothesis: mental representations are functions in a probabilistic lambda calculus.
• Thoughts are built compositionally (like molecules).
• Thinking is probabilistic inference.
http://projects.csail.mit.edu/church
48-51. Bob's box
• Bob has a box with two buttons (A and B) and a light (C).
• He presses both buttons, and the light comes on.
• How does the box work? Five hypotheses: A alone causes C; B alone causes C; A or B causes C; A and B (together) cause C; nothing causes C.
Goodman, Baker, Tenenbaum (2009; in prep.)
52. Human judgements
[Bar charts of mean bets ($) on each hypothesis (A alone, B alone, A or B, A and B, nothing causes C) in the Social condition (marked significant, N=15) and the Physical condition (marked ns).]
53. Purely causal learning
Causal-only model:
  (query
    (define world-cs (cs-prior))
    (define action (uniform))
    (define outcome (world-cs init-state action))
    world-cs
    (and (press-A action)
         (press-B action)
         (light-on outcome)))
[Bar chart: causal-only posterior probability over the cause of C (A only, B only, A or B, A and B, none).]
No conclusion is possible.
The evidence is confounded.
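A rough Python sketch of the causal-only query via rejection sampling is given below; it is not the talk's Church code, and the uniform action prior over the four button combinations is an assumption, but it shows why the evidence is confounded: every structure that can turn the light on explains the data equally well.

    # Rejection-sampling sketch of the causal-only model (Python, not Church).
    import random
    from collections import Counter

    HYPOTHESES = ["A alone", "B alone", "A or B", "A and B", "nothing"]

    def cs_prior():
        return random.choice(HYPOTHESES)             # uniform prior over causal structures

    def light_on(structure, press_a, press_b):
        if structure == "A alone":  return press_a
        if structure == "B alone":  return press_b
        if structure == "A or B":   return press_a or press_b
        if structure == "A and B":  return press_a and press_b
        return False                                  # "nothing" causes the light

    samples = []
    while len(samples) < 5000:
        world = cs_prior()
        press_a, press_b = random.random() < 0.5, random.random() < 0.5   # uniform action
        if press_a and press_b and light_on(world, press_a, press_b):     # condition on the data
            samples.append(world)

    posterior = Counter(samples)
    for h in HYPOTHESES:
        print(h, round(posterior[h] / len(samples), 2))
    # The four light-causing structures come out equally probable: the evidence is confounded.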
54. Explaining actions
Beliefs (a causal model: A, B → C) and Desires feed into a Decision, which produces Actions.
Rational action:
  (define decide
    (λ (state causal-model utility)
      (query
        (define action (action-prior))
        action
        (flip (utility (causal-model state action))))))
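The decide function can be read as inference over actions, with a coin flip weighted by utility acting as a soft condition. Below is a hedged Python sketch of that reading; helper names such as action_prior and the toy model and utilities are illustrative, not from the talk.

    # Sketch of "rational action": sample actions and keep them with probability
    # proportional to the utility of the predicted outcome.
    import random

    def decide(state, causal_model, utility, action_prior, n=1):
        """Return n actions sampled roughly in proportion to their utility."""
        chosen = []
        while len(chosen) < n:
            action = action_prior()
            outcome = causal_model(state, action)
            if random.random() < utility(outcome):   # (flip (utility ...)) as a soft condition
                chosen.append(action)
        return chosen if n > 1 else chosen[0]

    # Toy usage: an agent who believes "A and B causes the light" and wants the light on
    # will overwhelmingly choose to press both buttons.
    actions = [("A",), ("B",), ("A", "B"), ()]
    model = lambda state, a: {"light": ("A" in a) and ("B" in a)}
    util  = lambda outcome: 0.9 if outcome["light"] else 0.1
    print(decide("init", model, util, lambda: random.choice(actions), n=5))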
55-58. Causal learning models
Causal-only model:
  (define world-cs (cs-prior))
  (define action (uniform))
  (define outcome (world-cs init-state action))
[Bar chart: causal-only posterior probability over the cause of C (A only, B only, A or B, A and B, none).]
Social + causal model (knowledgeable agent assumption: cs-belief is the true world-cs; rational agent assumption: the action comes from decide):
  (define world-cs (cs-prior))
  (define utility (uniform))
  (define cs-belief world-cs)
  (define action (decide init-state cs-belief utility))
  (define outcome (world-cs init-state action))
[Bar chart: social + causal posterior probability over the cause of C.]
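Putting the two pieces together, the following Python sketch (again, not the talk's Church code; the action set, utilities, and priors are assumptions) mimics the social + causal model: Bob is assumed to know the true causal structure and to act rationally, so his choice to press both buttons becomes evidence in itself.

    # Sketch of the social + causal model, reusing the ideas from the earlier sketches.
    import random
    from collections import Counter

    HYPOTHESES = ["A alone", "B alone", "A or B", "A and B", "nothing"]
    ACTIONS = [("A",), ("B",), ("A", "B"), ()]

    def light_on(structure, action):
        a, b = "A" in action, "B" in action
        return {"A alone": a, "B alone": b, "A or B": a or b,
                "A and B": a and b, "nothing": False}[structure]

    def decide(structure):
        # Rational agent who wants the light on: resample until a utility flip succeeds.
        while True:
            action = random.choice(ACTIONS)
            if random.random() < (0.9 if light_on(structure, action) else 0.1):
                return action

    samples = []
    while len(samples) < 5000:
        world = random.choice(HYPOTHESES)          # cs-prior
        action = decide(world)                     # knowledgeable, rational agent
        if action == ("A", "B") and light_on(world, action):   # condition on what Bob did and saw
            samples.append(world)

    posterior = Counter(samples)
    for h in HYPOTHESES:
        print(h, round(posterior.get(h, 0) / len(samples), 2))
    # "A and B" now dominates: why press both buttons unless he had to?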
59-64. Scalar implicature
"Some of the plants have sprouted." (Plants usually sprout.)
The speaker's Beliefs and Desires (to be informative and parsimonious) produce Actions (utterances: "...").
Model: [plots of plausibility (Z-score) against the number of plants sprouted (0:5 through 5:5), under full knowledge and under partial knowledge, compared with human judgements.]
Goodman, et al (in prep)
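For a concrete feel for this kind of computation, here is a deliberately simplified Python sketch in the spirit of an informative-speaker model; the utterance set, uniform priors, and scoring are made up for illustration and are not the model reported on the slides.

    # Simplified informative-speaker sketch of a scalar implicature: a literal listener
    # hearing "some" still allows 5/5, but a listener reasoning about why the speaker
    # did not say "all" downweights 5/5.
    STATES = [0, 1, 2, 3, 4, 5]                 # number of plants sprouted (out of 5)
    UTTERANCES = {"some": lambda n: n >= 1,     # literal meanings
                  "all":  lambda n: n == 5,
                  "none": lambda n: n == 0}

    def literal_listener(utterance):
        truth = [1.0 if UTTERANCES[utterance](n) else 0.0 for n in STATES]
        z = sum(truth)
        return [t / z for t in truth]           # uniform over states where the utterance is true

    def speaker(state):
        # Informative speaker: prefers utterances under which a literal listener
        # assigns the true state high probability.
        scores = {u: literal_listener(u)[state] for u in UTTERANCES}
        z = sum(scores.values())
        return {u: s / z for u, s in scores.items()}

    def pragmatic_listener(utterance):
        weights = [speaker(n)[utterance] for n in STATES]   # uniform prior over states
        z = sum(weights)
        return [w / z for w in weights]

    print(literal_listener("some"))     # 5/5 is still possible for the literal listener
    print(pragmatic_listener("some"))   # 5/5 gets much less weight: "some" implicates "not all"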
65. Summary
• The probabilistic language of thought combines composition and probability.
• We can explain complex, flexible human thinking...
• And engineer flexible computer intelligence.
Editor's Notes
History: Two computational principles...
To explain real cognition we need both.
My research: unify these ideas,
Tackle new areas - the real payoff.
Named for Alonzo Church
We have a formalism for stochastic functions
..church is universal for both representation and inference.
rest of talk -- schematic church.. broader framework..
Intuition: why would he have pressed both buttons unless he had to?
But where do actions come from, and why are actions diagnostic of cs-world?