Nature is the ultimate complex system. Nature 1.0 is seeds & soil. *Evolving.* Nature 2.0 adds silicon & steel. *Evolving.*
Presented to Complex Systems Group, Stanford University, on May 4, 2018.
Curated Proof Markets & Token-Curated Identities in Ocean Protocol | Trent McConaghy
This talk describes Ocean Protocol’s token mechanics via step-by-step examples of how users earn tokens by curating data and making it available.
Blog post: https://medium.com/@trentmc0/curated-proofs-markets-a-walk-through-of-oceans-core-token-mechanics-3d50851a8005
Presented at 9984 Blockchain Meetup, Berlin, Mar 28, 2018
This talk describes how tokens/decentralization and complex systems relate. Contents:
-blockchains as trust machines
-blockchains as incentive machines
-evolutionary algorithm design (and agent based simulation) for token design
-benevolent computer viruses (aka smart contracts)
-AI DAOs
-blockchains as life
Presented at Santa Fe Institute, New Mexico, Jan 31, 2018
Video at: https://medium.com/abq-blockchain-community/talking-blockchain-ai-complex-systems-3c5a33676f85
This talk describes the problem of data silos, and the root cause which is lack of incentive to share. Ocean Protocol aims to democratize data for use by AI, by leveraging blockchain incentives. It uses a Proofed Curation Market construction, which combines cryptographic proof (e.g. proof of availability) with curation markets.
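As a toy illustration of the curation-market half of that construction (this is NOT Ocean Protocol's actual mechanism, and the linear curve and numbers are invented), staking on a dataset can be modeled as buying into a bonding curve whose price rises with total supply, so early curators of good datasets profit when later curators buy in:

```python
# Toy bonding-curve curation market. Illustrative sketch only;
# NOT Ocean Protocol's actual token mechanics.

class CurationMarket:
    def __init__(self, slope=0.1):
        self.slope = slope      # price per unit of supply (invented parameter)
        self.supply = 0.0       # total curation tokens minted

    def price(self):
        """Current marginal price on the linear bonding curve."""
        return self.slope * self.supply

    def buy(self, tokens):
        """Mint `tokens`; cost is the area under the curve."""
        new_supply = self.supply + tokens
        cost = self.slope * (new_supply**2 - self.supply**2) / 2
        self.supply = new_supply
        return cost

    def sell(self, tokens):
        """Burn `tokens`; refund is the same area, traversed downward."""
        new_supply = self.supply - tokens
        refund = self.slope * (self.supply**2 - new_supply**2) / 2
        self.supply = new_supply
        return refund

market = CurationMarket()
cost_early = market.buy(10)   # early curator stakes cheaply
cost_late = market.buy(10)    # later curator pays more for the same stake
assert cost_late > cost_early
```

The rising cost is what makes early, accurate curation profitable: an early staker can later sell back into the curve at the higher price that their own (and others') buying created.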
The core feature of tokenized ecosystems, aka public blockchains, is getting people to do stuff. In this talk, I give more structure to this idea using a framing from optimization literature, and more precisely, evolutionary algorithms (EAs). I give examples of this approach using Bitcoin and Ocean Protocol as examples.
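The EA framing above can be made concrete with a minimal (1+1) evolutionary loop. In this toy sketch (my illustration, not from the talk), the fitness function plays the role of a block reward: it defines what behavior the system pays agents to exhibit, and the loop optimizes toward it:

```python
import random

def evolve(fitness, length=20, generations=2000, seed=0):
    """Minimal (1+1) evolutionary algorithm over bitstrings.
    In the token-design analogy, `fitness` is the block reward:
    it defines what the system pays participants to do."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(length)]
    for _ in range(generations):
        # Flip each bit with probability 1/length (a standard mutation rate).
        child = [bit ^ (rng.random() < 1.0 / length) for bit in parent]
        if fitness(child) >= fitness(parent):  # keep the better design
            parent = child
    return parent

# OneMax as a stand-in objective: reward every bit set to 1.
best = evolve(fitness=sum)
assert sum(best) >= 15  # the incentive drives the design toward all-ones
```

The analogy: Bitcoin's "fitness function" (hash-rate-weighted block reward) reliably steers miners toward securing the network; token design is choosing the objective so that the emergent optimization does what you want.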
Link to video: https://www.youtube.com/watch?v=Sm8j0u5NuGQ
[Energy/abundance edition] Nature 2.0: The Cradle of Civilization Gets an Upg... | Trent McConaghy
Nature is the cradle of civilization.
Nature 1.0 is seeds & soil.
Nature 2.0 adds silicon & steel. AI, blockchain, IoT towards abundance.
Keynote address at Startup Energy Transition (SET) Festival, Apr 16, 2018, Berlin, Germany
Related blog post:
https://medium.com/@trentmc0/nature-2-0-27bdf8238071
This talk is a first stake in the ground towards a practice of token engineering: the theory, practice and tools to analyze, design, and verify tokenized ecosystems.
We frame token design as optimization design, then use optimization design methodology for token design. Furthermore, we can document emerging patterns for token design. We give a case study: the design of Ocean Protocol.
This talk was presented at Ethereum Community Conference (EthCC) in Paris, Mar 8, 2018
Related essay: https://blog.oceanprotocol.com/towards-a-practice-of-token-engineering-b02feeeff7ca
In AI, it's all about the data. But it's hard to get the data, and to get *good* data with provenance. This talk shows how blockchains can help, with real-world examples including:
-a data exchange for self-driving car data (with Toyota Research and others)
-pooling designs for 3D printing fraud detection (with Innogy and others)
-and AI DAOs - AIs that can accumulate wealth
This was given as an invited talk at Consensus 2017, May 22 in NYC.
This talk introduces Ocean Protocol. It describes:
-how data drives AI (artificial intelligence)
-the gap between data-haves and AI-haves
-the data silo crisis
-how Ocean addresses these issues by creating a substrate to catalyze a flowering of data marketplaces
-the Ocean structured approach to token design, from values to stakeholders to software stack.
Video: https://www.youtube.com/watch?v=fMDD0aTVt4s
This talk was presented at "9984 Summit - Blockchain Futures for Developers, Enterprises, and Society" hosted by IPDB & BigchainDB.
AI for Good is starting to be demonstrated, in addressing impact problems like the UN Sustainable Development Goals. But how can we scale it? This talk describes how an AI Commons manifested as a blockchain public utility network -- Ocean Protocol -- can be a key part of the solution.
This talk was a keynote at DutchChain Odyssey conference, Den Bosch, Feb 4, 2019.
Ocean Protocol: New Powers for Data Scientists | Trent McConaghy
Summary of benefits: more data, AI data/compute provenance, new income opportunities.
This talk was presented at WorldSummitAI in Amsterdam, October, 2018.
Energy Data Access Management with Ocean Protocol | Trent McConaghy
Video: https://www.youtube.com/watch?v=lC50EARadwo&feature=youtu.be
Outline:
-Problem 1: Gap between problem owners & problem solvers
-Problem 2: Want more data for accuracy, but it raises privacy and control issues
-Solution: Decentralized orchestration as a foundation
-Solution 1: Connect problem owners and problem solvers with marketplaces & commons on top of foundation
-Solution 2: Bring compute to on-premise data
-Use cases & collaborators
Opportunities for Genetic Programming Researchers in Blockchain | Trent McConaghy
Summary of opportunities:
-Compute+++
-Data+++
-Evolve code: Solidity, EVM or WASM bytecode
-“Unstoppable” evolution
-Evolvable ArtDAO
-Agent life forms
The Evolution of Blue Ocean Databases, from SQL to Blockchain | Trent McConaghy
1. The evolution of blue ocean databases, from Oracle to MySQL to MongoDB to BigchainDB
2. Decentralized software stacks, including decentralized file systems, decentralized databases, and decentralized processing (smart contracts)
[This was presented at a BigchainDB Hackfest, Feb 2017 in Berlin]
Top-Down? Bottom Up? A Survey of Hierarchical Design Methodologies | Trent McConaghy
How do you optimize, synthesize, or evolve a design that has 10 thousand parts? 10 billion? Doing it flat could easily fail.
There is a way! In many practical cases, we can recursively decompose the problem into many sub-problems. We then solve each sub-problem and stitch it together to solve the main problem. We can do this in a well-structured fashion using a hierarchical design methodology. There are top-down and bottom-up variants; both achieve remarkable results. This talk explores those approaches. Could this be how nature scales?
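A hedged toy sketch of that idea (assuming, for illustration, a design whose cost separates cleanly into per-block costs): instead of searching the full flat space, solve each sub-problem independently and stitch the solutions together. The names and numbers below are invented, not from the talk:

```python
from itertools import product

def flat_optimize(blocks):
    """Exhaustive search over the full flat design space: the number of
    candidates grows multiplicatively with the number of blocks."""
    best = min(product(*blocks), key=lambda design: sum(design))
    return list(best)

def hierarchical_optimize(blocks):
    """Top-down decomposition: when the objective separates per block,
    each sub-problem can be solved independently and the results stitched."""
    return [min(block) for block in blocks]

# Each block offers a few candidate implementations with different costs.
blocks = [[3, 1, 4], [1, 5, 9], [2, 6, 5], [3, 5, 8]]
assert hierarchical_optimize(blocks) == flat_optimize(blocks)  # same optimum
# Flat search visited 3**4 = 81 designs; hierarchical visited only 3*4 = 12.
```

With 10 thousand blocks the flat search is hopeless while the decomposed one stays linear; the hard part in real designs, which the talk addresses, is that sub-problems usually interact and must be coupled through performance-space models rather than solved fully independently.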
Video:
T. McConaghy, "Top Down? Bottom Up? A Survey of Hierarchical Design Methodologies", Machine Learning Group Berlin, Berlin, Feb. 26, 2018
https://www.youtube.com/watch?v=FvxwIplXBQw.
Original paper:
G. G. E. Gielen, T. McConaghy, T. Eeckelaert, "Performance space modeling for hierarchical synthesis of analog integrated circuits", in Proc. Design Automation Conference (DAC), pp. 881-886, June 13-17, 2005.
http://trent.st/content/2005_DAC_hierarchy.pdf
What will the Web3 Data Economy look like?
The shadow money economy (closed, power concentrated) moved to the token economy (open, permissionless). It has a base layer of reserve currency / store of value (BTC), a unit of exchange (ETH), and a token/app launch platform (Ethereum). And there are financial & utility last miles, like wallets, exchanges, and dapps.
We envision something similar for the data economy. The shadow data economy (closed, power concentrated) will move to the Web3 data economy (open, permissionless). It will have a base layer of reserve currency / store of value, a unit of exchange, and a data-asset launch platform. Ocean Protocol's design provides all of these as a substrate, with artificial intelligence use cases as the linchpin. Finally, just as in the token economy, there are financial & utility last miles, like data wallets, data exchanges, and data science tools using Ocean tokens.
This talk was presented at the Web3 Summit, Berlin, Oct 22-24, 2018
PDF version: http://trent.st/content/20181022.2%20Web3%20Summit%20-%20McConaghy.pdf
DN 2017 | A New Data Economy with Power to the People | Trent McConaghy | B... | Dataconomy Media
Trent McConaghy is an AI researcher and blockchain engineer. He is the Founder & CTO of BigchainDB. He started doing AI research for national defense as an undergrad, going on to obtain a PhD from KU Leuven. He has done Machine Learning research for the Canadian Department of National Defense, and has written two books and 35 papers, and holds 20 patents on Machine Learning, circuits and creativity.
DN2017 | From Big Data to Smart Data | Kirk Borne | Booz Allen Hamilton | Dataconomy Media
Smart data are essential when faced with massive-scale data collections. "Smart" refers to data that are tagged or indexed with meaning-filled metadata that carry information about the semantic meaning of the data, its applications, use cases, content, context, and more. Such meta-tags enable efficient and effective discovery, description, and delivery of the right data at the right time, both to humans and to automatic processes.
Kirk Borne is a data scientist and an astrophysicist who has used his talents at Booz Allen since 2015. He was professor of astrophysics and computational science at George Mason University (GMU) for 12 years. He served as undergraduate advisor for the GMU data science program and graduate advisor in the computational science and informatics Ph.D. program.
Predictive Analysis for Airbnb Listing Rating using Scalable Big Data Platform | Savita Yadav
KMIS International Conference 2021.
This talk provides insights into the performance of predictive models for Airbnb ratings using Big Data and distributed parallel computing systems. Using two-class classification models, we predict whether a property has a high or a low rating based on the features of the listing. This helps hosts know whether their property is suitable and how their listing compares to other similar listings. We compare the results and performance of the rating-prediction models using accuracy and computing-time metrics.
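As a self-contained stand-in for the idea of two-class classification (the talk's actual pipeline runs on a Spark ML cluster with real Airbnb data; the features, data, and model below are invented for illustration), here is a minimal perceptron classifying listings as high or low rated:

```python
# Toy two-class classifier (perceptron). Stands in for the Spark ML
# pipelines in the talk; features and data here are invented.

def train_perceptron(rows, labels, epochs=20, lr=0.1):
    """rows: feature vectors; labels: 1 = high rating, 0 = low rating."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                     # 0 when correct; +/-1 otherwise
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Invented features: [cleanliness score, review count / 100]
X = [[0.9, 1.2], [0.8, 0.9], [0.2, 0.1], [0.3, 0.2]]
y = [1, 1, 0, 0]
w, b = train_perceptron(X, y)
assert [predict(w, b, x) for x in X] == y  # separable toy data is fit exactly
```

In the talk's setting the same two-class decision is made by Spark ML classifiers over many features and millions of rows, with accuracy and computing time compared across models.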
Predictive Analysis of Financial Fraud Detection using Azure and Spark ML | Jongwook Woo
This talk provides insights, performance results, and architecture for financial fraud detection on mobile money transaction activity in Azure ML and Spark. We predict and classify transactions as normal or fraudulent, using a small sample in Azure ML (a traditional system) and a massive data set in Spark ML (Big Data). I will present predictive analysis with several classification models, experimenting in Azure ML and Spark ML. In addition, the scalability of Spark ML will be shown by running the models on Spark clusters with different numbers of nodes on Amazon AWS.
Learn how Ocean Protocol can be used to further scientific research. A presentation by Ocean's Lead Data Scientist Marcus Jones at Blockchain for Science Conference in Berlin on November 3, 2019.
Scalable Predictive Analysis and The Trend with Big Data & AI | Jongwook Woo
The history and latest trends of Big Data and scalable predictive analysis for large-scale data sets, using distributed Machine Learning and Deep Learning with GPUs in Spark and RAPIDS. Invited talk at the IS department of Yonsei University, Korea.
Rating Prediction using Deep Learning and Spark | Jongwook Woo
Distributed Deep Learning to predict Amazon review-data ratings in Spark using Analytics Zoo on AWS; published as "Rating Prediction using Deep Learning and Spark" at The 11th International Conference on Internet (ICONI 2019), Hanoi, Vietnam, Dec 15-18, 2019.
Approximate "Now" is Better Than Accurate "Later" | NUS-ISS
How does Twitter track the top trending topics?
How does Amazon keep track of the top-selling items for the day?
How many cabs have been booked this month using your App?
Is the password that a new user is choosing a common/compromised password?
Modern web-scale systems process billions of transactions and generate terabytes of data every single day. To find answers to questions against this data, one would initiate a multi-minute query against a NoSQL datastore or kick off a batch job written in a distributed processing framework such as Spark or Flink. However, these jobs are throughput-heavy and not suited for realtime low-latency queries, yet you and your customers would like all this information "right now".
At the end of this talk, you'll realize that you can power these low-latency queries with an incredibly low memory footprint *if* you are willing to accept answers that are, say, 96-99% accurate. This talk introduces some of the go-to probabilistic data structures used by organisations with large amounts of data: specifically the Bloom filter, Count-Min Sketch, and HyperLogLog.
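One of the structures named above, the Bloom filter, fits in a few lines of stdlib Python. It answers "definitely not seen" exactly and "probably seen" with a tunable false-positive rate, which suits the compromised-password question asked earlier. The bit-array size, hash count, and password list here are illustrative choices, not from the talk:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: set membership in k hashes and m bits.
    False positives are possible; false negatives are not."""
    def __init__(self, m_bits=8192, k_hashes=4):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        # Derive k independent positions by salting SHA-256 with the index.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# Toy compromised-password screen (hypothetical entries).
leaked = BloomFilter()
for pw in ["123456", "password", "qwerty"]:
    leaked.add(pw)
assert "password" in leaked   # "probably compromised" (never a false negative)
```

At web scale the same structure screens billions of entries in a fixed memory budget; the 96-99% accuracy trade-off in the talk's title is exactly the tunable false-positive rate, set by the bits-per-item and hash-count parameters.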
Introduction to Big Data and its Trends | Jongwook Woo
Big Data has been popular for the last 10 years, using Hadoop and Spark for data analysis and prediction over large-scale data sets on distributed parallel computing systems. The platform has expanded to include NoSQL databases and search engines, and has grown more popular alongside cloud computing. Deep Learning has become a buzzword over the past several years, powered by GPUs and Big Data; it lets even small companies and labs own supercomputers on a modest budget, a "dream come true" for IT and business. This talk introduces the history and trends of Big Data and AI platforms, and presents Big Data predictive analysis.
Introduction to Big Data and AI for Business Analytics and Prediction | Jongwook Woo
Big Data has been popular for the last 10 years, using Hadoop and Spark for data analysis and prediction over large-scale data sets on distributed parallel computing systems. The platform has expanded to include NoSQL databases and search engines, and has grown more popular alongside cloud computing. Deep Learning has become a buzzword over the past several years, powered by GPUs and Big Data; it lets even small companies and labs own supercomputers on a modest budget, a "dream come true" for IT and business. This talk introduces the history and trends of Big Data and AI platforms, and shows how predictive analysis can be applied in business using Big Data & AI.
Trent McConaghy, CTO of BigchainDB, talks about the journey from blockchain databases, to DAOs to AI DAOs, covering everything from the architecture to knowledge extraction and machine creativity.
BigchainDB: Blockchains for Artificial Intelligence by Trent McConaghy | BigchainDB
How can blockchains help AI?
-Decentralized model exchange
-Model audit trail
-AI DAOs
-more
A blockchain caveat or two:
-Completely new code bases
-Reinventing consensus
-No sharding = no scaling
-No querying / single-node querying
Let's fix this...
What is a Blockchain?
What is a Bitcoin?
What is a Crypto Currency?
What is an ICO (Initial Coin Offering)?
Attend this workshop to demystify your assumptions and gain greater knowledge about these latest technologies.
In this workshop, Blockchain Marketer Mitchell Loureiro and Designer Paulo Fonseca will help you understand how to use blockchains to design social systems that fuel themselves. You'll learn how to construct your very own decentralised organisation and how to devise an incentive system based on blockchain technology.
Check for blockchain-related metadata: When art is created using blockchain technology, there may be metadata associated with the artwork that indicates its origins. This could include information about the artist, the date the artwork was created, the blockchain platform used, and more. If you have access to this metadata, you can examine it to see if it suggests that the artwork was generated using blockchain technology.
Look for a blockchain certificate: Some blockchain platforms offer certificates that can be used to verify the authenticity and provenance of art. These certificates may include information about the artwork's origins, the blockchain platform used, and more. If you have access to a certificate associated with the artwork in question, you can examine it to see if it suggests that the artwork was generated using blockchain technology.
Consult with experts: If you're unsure whether a particular piece of art was generated using blockchain technology, you can consult with experts in the field. This might include art historians, blockchain developers, or other professionals with expertise in art and technology. They may be able to examine the artwork and its associated metadata to provide insights into its origins and the technologies used to create it.
Check for other indicators: While there may not be a foolproof way to determine whether a piece of art was generated using blockchain technology, there may be other indicators that can help you make an informed guess. For example, you could look for stylistic or thematic similarities between the artwork in question and other pieces of art that are known to have been created using blockchain technology. You could also examine the artwork's provenance and history to see if there are any clues that suggest it may have been created using this technology.
Overall, verifying whether a piece of art was generated using blockchain technology may require a combination of these approaches, as well as careful analysis and consideration of the available evidence.
Reverse Image Search: One way to check if an artwork has been generated using AI is to perform a reverse image search on the artwork. This will help you identify if the artwork has been generated using an existing image dataset or if it is unique.
Metadata Analysis: Another method is to analyze the metadata of the artwork. If the metadata indicates that the artwork was created using an AI algorithm, then it is likely that the artwork was generated using AI.
Pixel Analysis: You can also perform pixel analysis on the artwork. AI-generated art often has a distinct pixel pattern that is different from traditional art.
Artist Verification: If the artwork is attributed to a specific artist, you can check if that artist has a history of using AI in their art. If the artist is known for using AI, it is likely that the artwork was generated using AI.
Most people want to jump in and know how to run an ICO or participate in one. They trade buzzwords and follow the herd on blockchain and crypto tokens without knowing where these really apply. The objective of this talk is to introduce important concepts that need to be understood before getting into ICOs and crypto valuation. These concepts are introduced gradually: first through a metaphor, then through simple mental models, and finally at the level of first-principles thinking. This is most useful for entrepreneurs and investors who ought to be thinking about blockchain, ICOs, and crypto tokens from first principles.
My talk at the @media Ajax conference in London in November 2007 about the non-technical steps you can take to make JavaScript and Ajax work for larger teams.
Approaches for application request throttling - dotNetCologneMaarten Balliauw
Speaking from experience building a SaaS: users are insane. If you are lucky, they use your service, but in reality, they probably abuse. Crazy usage patterns resulting in more requests than expected, request bursts when users come back to the office after the weekend, and more! These all pose a potential threat to the health of our web application and may impact other users or the service as a whole. Ideally, we can apply some filtering at the front door: limit the number of requests over a given timespan, limiting bandwidth, ...
In this talk, we’ll explore the simple yet complex realm of rate limiting. We’ll go over how to decide on which resources to limit, what the limits should be and where to enforce these limits – in our app, on the server, using a reverse proxy like Nginx or even an external service like CloudFlare or Azure API management. The takeaway? Know when and where to enforce rate limits so you can have both a happy application as well as happy customers.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
10. “Emerging Decentralized Stack”
Storage: store of value (Bitcoin, Zcash), file system (IPFS/FileCoin, Swarm), database (BigchainDB, OrbitDB)
Processing: business logic (Ethereum, Dfinity), high-performance compute (TrueBit, Golem, iExec)
Communications: data (TCP/IP), value (Interledger, Cosmos), state (PolkaDot)
12. “Incentive Machine”
Bitcoin incentivizes security = hash rate = electricity.
Result: projected to use more electricity than the USA by mid-2019!
Get people to do stuff, by rewarding with tokens.
14. “DAO: Decentralized Autonomous Organization”
A computational process that:
• runs autonomously,
• on decentralized infrastructure,
• with resource manipulation.
It’s code that can own stuff! Aka a “good computer virus”.
15. “Life Form”
“Bitcoin is the first example of a new form of life. It lives and breathes on the internet. It lives because it can pay people to keep it alive. It lives because it performs a useful service that people will pay it to perform. … It can’t be stopped. It can’t even be interrupted. If nuclear war destroyed half of our planet, it would continue to live, uncorrupted.”
-Ralph Merkle
24. Realization: Tokenized Ecosystems Are a Lot Like Evolutionary Algorithms!

What | Tokenized ecosystem | Evolutionary algorithm
Goals | Block reward function, e.g. “maximize hash rate” | Objective function, e.g. “minimize error”
Measurement & test | Proof, e.g. “Proof of Work” | Evaluate fitness, e.g. “simulate circuit”
System agents | Miners & token holders (humans) in a network | Individuals (computer agents) in a population
System clock | Block reward interval | Generation
Incentives & disincentives | You can’t control humans, just reward (give tokens) and punish (slash stake) | You can’t control individuals, just reward (reproduce) and punish (kill)
26. Steps in EA Design
1. Formulate the problem: objectives, constraints, design space.
2. Try an existing EA solver. If needed, try different problem formulations or solvers.
3. Design a new solver?
27. 1. Formulation of the optimization problem: objectives & constraints in a design space.
28. 2. Try an existing EA solver. Does it converge?
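Step 2 (“try an existing EA solver; does it converge?”) can be made concrete with a minimal evolutionary loop. This is a toy sketch, not any particular solver; the target vector and error function are made-up stand-ins for a real objective.

```python
import random

def evolve(fitness, n_genes, pop_size=20, generations=100, sigma=0.1):
    """Minimal EA: evaluate fitness, keep the best half, mutate to refill."""
    pop = [[random.uniform(-1, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                    # lower is better: "minimize error"
        parents = pop[: pop_size // 2]           # selection: the rest are "killed"
        children = [[g + random.gauss(0, sigma) for g in p] for p in parents]
        pop = parents + children                 # elitism: parents survive intact
    return min(pop, key=fitness)

# Usage: minimize squared error to a made-up target vector.
target = [0.5, -0.25, 0.1]
err = lambda ind: sum((g - t) ** 2 for g, t in zip(ind, target))
best = evolve(err, n_genes=3)
```

To check convergence in the spirit of slide 28, plot or print `err(best)` across generations; if it plateaus far from zero, reformulate the problem or swap the solver.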
31. Steps in Token Ecosystem Design
1. Formulate the problem: objectives, constraints, design space.
2. Try an existing building block. If needed, try different formulations or EA solvers.
3. Design a new building block?
32. 1. Formulate the Problem [ex. Ocean]
Who are the stakeholders? What do they want? These yield objectives & constraints.
Objective:
• Maximize supply of relevant data.
Constraints (a checklist):
• For priced data: is there incentive for supplying more? Referring? Spam prevention?
• For free data: ditto?
• Does the token give higher marginal value to users vs. hodlers?
• Are people incentivized to run keepers?
• Is it simple? Is onboarding low-friction?
33. 2. Try Existing Patterns
1. Curation
2. Proofs of human or compute work
3. Identity
4. Reputation
5. Governance / software updates
6. Third-party arbitration
7. …
34. 2.1 Patterns for Curation
•Binary membership: Token Curated Registry (TCR)
•Discrete-valued membership: Layered TCR (like ALPS!)
•Continuous-valued membership: Curation Markets
•Hierarchical membership: each label gets a TCR
•Work tied to membership: Curated Proofs Market
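The binary-membership pattern (Token Curated Registry) can be sketched in a few lines. This is a hypothetical, heavily simplified model, not a real contract: no challenge period, no vote commitment scheme, and the deposit and vote numbers are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class TCR:
    """Toy Token Curated Registry: stake to apply, token-weighted vote to challenge."""
    min_deposit: int
    listings: dict = field(default_factory=dict)   # name -> staked deposit

    def apply(self, name: str, deposit: int) -> bool:
        """A candidate stakes tokens to apply for listing."""
        if deposit < self.min_deposit or name in self.listings:
            return False
        self.listings[name] = deposit
        return True

    def challenge(self, name: str, votes_for: int, votes_against: int) -> str:
        """A token-weighted vote resolves the challenge; rejected listings lose stake."""
        if name not in self.listings:
            return "no such listing"
        if votes_against > votes_for:
            del self.listings[name]    # listing removed, deposit slashed
            return "removed"
        return "kept"

registry = TCR(min_deposit=100)
registry.apply("good-dataset", 150)    # accepted: deposit meets the minimum
registry.apply("spam-dataset", 10)     # rejected: deposit too low (spam prevention)
registry.challenge("good-dataset", votes_for=700, votes_against=300)  # kept
```

The staking requirement is what ties curation to skin-in-the-game: spam costs the spammer, and honest curators earn by keeping the list clean.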
35. 2. Try existing patterns: evaluate on objectives & constraints. [Ex Ocean: none passed…]

Key question | 1 | 2 | 3 | 4 | 5
For priced data: incentive for supplying more? Referring? | ✖ | ≈ | ✔ | ≈ | ≈
For priced data: good spam prevention? | ≈ | ✔ | ✔ | ✔ | ✔
For free data: incentive for supplying more? Referring? | ✖ | ≈ | ✖ | ✔ | ✔
For free data: good spam prevention? | ≈ | ✔ | ≈ | ✔ | ≈
Does the token give higher marginal value to users of the network vs. external investors? E.g. does return on capital increase as stake increases? | ✔ | ✔ | ✔ | ✔ | ✔
Are people incentivized to run keepers? | ≈ | ≈ | ✔ | ✔ | ✔
Is it simple? Is onboarding low-friction? Where possible, do we use incentives/crypto rather than legal recourse? | ✔ | ✔ | ≈ | ≈ | ✔
36. 3. Try new patterns: evaluate on objectives & constraints. [Ex Ocean: pass!]

Key question | 1 | 2 | 3 | 4 | 5 | 6
For priced data: incentive for supplying more? Referring? | ✖ | ≈ | ✔ | ≈ | ≈ | ✔
For priced data: good spam prevention? | ≈ | ✔ | ✔ | ✔ | ✔ | ✔
For free data: incentive for supplying more? Referring? | ✖ | ≈ | ✖ | ✔ | ✔ | ✔
For free data: good spam prevention? | ≈ | ✔ | ≈ | ✔ | ≈ | ✔
Does the token give higher marginal value to users of the network vs. external investors? E.g. does return on capital increase as stake increases? | ✔ | ✔ | ✔ | ✔ | ✔ | ✔
Are people incentivized to run keepers? | ≈ | ≈ | ✔ | ✔ | ✔ | ✔
Is it simple? Is onboarding low-friction? Where possible, do we use incentives/crypto rather than legal recourse? | ✔ | ✔ | ≈ | ≈ | ✔ | ✔
37. Simulation of Tokenized Ecosystems?
• Q: How do we design computer chips? ($50M+ at stake) A: Simulator + CAD tools.
• Q: How are we currently designing tokenized ecosystems? ($1B+ at stake) A: By the seat of our pants! Which means we might be getting it all wrong!
What we (desperately) need:
1. Simulators: agent-based systems [Incentivai, ..]
2. CAD tools: for token design
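To show what an agent-based simulator of a token ecosystem even means, here is a deliberately tiny sketch of the slide-12 dynamic (block reward drives hash rate): agents join mining while it beats their electricity cost. The numbers and the join/leave rule are invented for illustration, not a model of real Bitcoin.

```python
def simulate(block_reward: float, electricity_cost: float, steps: int = 50) -> int:
    """Agents join while mining beats their electricity cost, and leave otherwise."""
    miners = 10
    for _ in range(steps):
        expected_revenue = block_reward / miners   # reward is split across miners
        if expected_revenue > electricity_cost:
            miners += 1                            # profitable: an agent joins
        elif expected_revenue < electricity_cost and miners > 1:
            miners -= 1                            # unprofitable: an agent leaves
    return miners

# Miner count settles near the break-even point, block_reward / electricity_cost.
print(simulate(block_reward=100.0, electricity_cost=2.0))  # prints 50
```

Even a toy like this lets a designer ask “what if the reward halves?” before deploying, which is the whole point of simulating rather than designing by the seat of our pants.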
38. Design of Tokenized Ecosystems: From Mechanism Design to Token Engineering

Analysis | Synthesis
Game theory | Mechanism design
Optimization | Design

Add practical constraints, engineering theory, practice and tools, plus responsibility:
Token Engineering, for analysis & synthesis.
40. Definition of AI DAO
“An AI running on decentralized processing substrate”
<or>
“A DAO running with AI algorithms”
41–43. The ArtDAO
1. Run AI art engine to generate new image, using GP or deep learning.
2. Sell image on a marketplace, for crypto.
3. Repeat!
<Over time, it accumulates wealth, for itself.>
<It could even self-adapt: genetic programming.>
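The generate-sell-repeat loop can be sketched as pseudo-real code. Everything here is a stand-in: `generate_art` fakes a GP/deep-learning art engine, and `sell` fakes a marketplace that always clears at the ask price.

```python
import random

def generate_art(seed: int) -> str:
    """Stand-in for a GP / deep-learning art engine."""
    random.seed(seed)
    return f"artwork-{random.randint(0, 10**6):06d}"

def sell(artwork: str, ask_price: float) -> float:
    """Stand-in marketplace: assume the piece sells at the ask price."""
    return ask_price

def art_dao_loop(iterations: int, ask_price: float = 1.0) -> float:
    treasury = 0.0                           # crypto the DAO owns, for itself
    for i in range(iterations):
        piece = generate_art(seed=i)         # 1. generate a new image
        treasury += sell(piece, ask_price)   # 2. sell it for crypto
    return treasury                          # 3. repeat: wealth accumulates

print(art_dao_loop(iterations=10))  # prints 10.0
```

The interesting part is what the sketch omits: in a real ArtDAO the treasury is held by the contract itself, not by any human, which is what makes it “code that can own stuff”.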
46. AI DAO Arch 3: Swarm Intelligence
Many dumb agents with emergent AI complexity
47. Angles to Making AI DAOs
• DAO → AI DAO. Start with DAO, add AI.
• AI → AI DAO. Start with AI, add DAO.
• SaaS → DAO → AI DAO. SaaS to DAO, add AI
• Physical service → AI DAO
49. Evolving the ArtDAO
Two axes: the level to adapt at (market, artwork, code) and how to adapt (human-based vs. auto).
Human-based adaptation at the code level: humans put in new smart contract code (and related code in 3rd-party services), to improve the ArtDAO’s ability to generate art and amass wealth.
50. Evolving the ArtDAO
Auto-adaptation at the market level: it creates more of what humans buy, and less of what humans don’t buy.
51. Evolving the ArtDAO
Auto-adaptation at the artwork level: here, a human influences the creation of an artifact. For example, it presents four variants of a work, and the human clicks on a favorite. After 10 or 50 iterations, it will have a piece that the human likes, and purchases.
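Artwork-level adaptation is classic interactive evolution: show four variants, let the human’s click choose the next parent. The sketch below substitutes a distance-to-target function for the human, and the genome, target, and mutation size are all invented for illustration.

```python
import random

def mutate(genome, sigma=0.1):
    """Perturb each gene slightly to produce a variant of the work."""
    return [g + random.gauss(0.0, sigma) for g in genome]

def interactive_evolve(pick_favorite, genome_len=4, iterations=50):
    """Each round: present four mutated variants; the 'click' selects the parent."""
    parent = [random.uniform(0.0, 1.0) for _ in range(genome_len)]
    for _ in range(iterations):
        variants = [mutate(parent) for _ in range(4)]  # four candidate works
        parent = pick_favorite(variants)               # the human's favorite wins
    return parent

# Stand-in "human" who prefers works close to a made-up target aesthetic.
target = [0.2, 0.8, 0.5, 0.5]
dist = lambda v: sum((a - b) ** 2 for a, b in zip(v, target))
best = interactive_evolve(lambda vs: min(vs, key=dist))
```

No explicit fitness function is ever written down: the human’s clicks are the fitness function, which is why 10 to 50 iterations suffice to land on something they’ll buy.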
52. Evolving the ArtDAO
Auto-adaptation at the code level: here, the ArtDAO modifies its own code, in hopes of improving.
• It creates a copy of itself, changes that copy’s code just a little bit, and gives a tiny bit of resources to that new copy.
• If that new copy is bad, it will simply run out of resources and be ignored.
• But if that new copy is truly an improvement, the market will reward it, and it will be able to amass resources and split more on its own.
• Over time, the ArtDAO will spawn more children, and grandchildren, and the ones that do well will continue to spread. We end up with a mini-army of AI DAOs for art.
• If the buyers are DAOs too, it’s a network of DAOs, leading to swarm intelligence.
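The spawn-mutate-starve dynamic above can be simulated directly. This is a toy sketch under made-up assumptions: `skill` stands in for code quality, the market pays linearly in skill, and the spawn threshold, upkeep, and mutation size are arbitrary.

```python
import random

class ArtDAO:
    def __init__(self, skill, resources):
        self.skill = skill          # stand-in for "how good its code is"
        self.resources = resources  # crypto it owns

    def step(self, market_rate, upkeep):
        """Earn from the market, pay upkeep; spawn a mutated copy if rich enough."""
        self.resources += self.skill * market_rate - upkeep
        if self.resources > 2.0:
            self.resources -= 1.0   # give a tiny bit of resources to the child
            return ArtDAO(self.skill + random.gauss(0.0, 0.05), resources=1.0)
        return None

def simulate(generations=100):
    population = [ArtDAO(skill=1.0, resources=1.0)]
    for _ in range(generations):
        children = [c for dao in population
                    if (c := dao.step(1.0, 0.9)) is not None]
        # copies that run out of resources die and are ignored
        population = [d for d in population + children if d.resources > 0.0]
    return population

pop = simulate()
```

Good mutations earn above upkeep and keep splitting; bad ones starve. Nobody curates the population: the market is the selection pressure, which is exactly the EA analogy of slide 24 running on its own.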