This is part of an introductory course on Big Data Tools for Artificial Intelligence. These slides introduce students to the in-memory cluster computing framework Spark.
Real-Time Anomaly Detection with Spark MLlib, Akka and Cassandra – Natalino Busa
We present a solution for streaming anomaly detection, named “Coral”, based on Spark, Akka and Cassandra. In the system presented, we use Spark to run the data analytics pipeline for anomaly detection. By running Spark on the latest events and data, we make sure that the model is always up-to-date and that the number of false positives stays low, even under changing trends and conditions. Our machine learning pipeline uses Spark decision tree ensembles and k-means clustering. Once the model is trained by Spark, the model’s parameters are pushed to the Streaming Event Processing Layer, implemented in Akka. The Akka layer then scores thousands of events per second against the latest model provided by Spark. Spark and Akka communicate with each other using Cassandra as a low-latency data store. By doing so, we make sure that every element of this solution is resilient and distributed. Spark performs micro-batches to keep the model up-to-date, while Akka detects new anomalies using the latest Spark-generated data model. The project is currently hosted on GitHub. Have a look at: http://coral-streaming.github.io
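The batch side of such a pipeline can be sketched in a few lines of PySpark. This is only an illustration of the idea, not the Coral code; the input path, the feature layout and the choice of k are assumptions.

from pyspark import SparkContext
from pyspark.mllib.clustering import KMeans

sc = SparkContext(appName="anomaly-model-refresh")

# each event reduced to a numeric feature vector, e.g. [amount, hour, velocity]
events = sc.textFile("hdfs:///events/latest/*.csv") \
           .map(lambda line: [float(x) for x in line.split(",")])

# retrain the clustering on the freshest data (micro-batch style)
model = KMeans.train(events, k=8, maxIterations=20)

# hand the model parameters to the scoring layer; Coral uses Cassandra as this
# low-latency hand-off store, here the centers are simply printed
for i, center in enumerate(model.clusterCenters):
    print(i, list(center))

sc.stop()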
Using Anaconda to light up dark data. My talk given to the Berkeley Institute of Data Science describing Anaconda and the Blaze ecosystem for bringing a virtual analytical database to your data.
An introduction to Elasticsearch with a short demonstration in Kibana to present the search API. The slides cover:
- Quick overview of the Elastic stack
- Indexing
- Analyzers
- Relevance scoring
- One use case of Elasticsearch
The query used for the Kibana demonstration can be found here:
https://github.com/melvynator/elasticsearch_presentation
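For a flavour of the search API, here is a minimal sketch of a relevance-scored match query sent to a local cluster. The index name ("talks"), field ("title") and host are assumptions, not the demonstration queries from the repository.

import json
import urllib.request

query = {
    "query": {
        "match": {                 # analyzed full-text match, ranked by relevance
            "title": "big data tools"
        }
    },
    "size": 5
}

req = urllib.request.Request(
    "http://localhost:9200/talks/_search",         # assumed local index 'talks'
    data=json.dumps(query).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    hits = json.load(resp)["hits"]["hits"]

for hit in hits:
    print(hit["_score"], hit["_source"].get("title"))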
The main actor of this alchemical wonder will be a peculiar substance called "R". Some peasants describe "R" as "a software environment for statistical computing and graphics" – but they'll get burned at the stake anyway...
Every company generates various kinds of data. Be it accounting data, time records of employees, quality assurance sensor data ... or anything else.
Most of this data just exists and your company doesn't profit from it. I'd like to show you how to get started making more out of these hidden treasures using R.
We'll start with a very quick introduction to R. I will show you how R basically works, how it can be compared to Excel and why it will speed up your journey of transforming data into gold.
Equipped with a basic understanding of R I will take you on an expedition to what you can do with R – especially in combination with several other open source tools. Two very interesting tools in combination with R are LaTeX and Freeboard.
LaTeX allows you to generate beautiful-looking reports based on your data.
Freeboard is an open-source dashboard tool that can access and present data from various sources.
Using real-world examples, I will show you how different kinds of reports are created at dkd to support our team and management in their controlling tasks.
You've seen the basic two-stage example Spark programs, and now you're ready to move on to something larger. I'll go over lessons I've learned for writing efficient Spark programs, from design patterns to debugging tips.
The slides are largely just talking points for a live presentation, but hopefully you can still make sense of them for offline viewing as well.
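As one example of the kind of lesson such talks cover (my own illustration, not necessarily from these slides), the sketch below contrasts reduceByKey with groupByKey for a simple aggregation; the input path is a placeholder.

from pyspark import SparkContext

sc = SparkContext(appName="word-count-pattern")

pairs = sc.textFile("hdfs:///logs/*.txt") \
          .flatMap(lambda line: line.split()) \
          .map(lambda word: (word, 1))

# efficient: partial sums are computed on each partition before the shuffle
counts = pairs.reduceByKey(lambda a, b: a + b)

# equivalent but wasteful: every single 1 is shuffled across the network, then summed
# counts = pairs.groupByKey().mapValues(sum)

print(counts.take(10))
sc.stop()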
Greg Hogan – To Petascale and Beyond: Apache Flink in the Clouds (Flink Forward)
http://flink-forward.org/kb_sessions/to-petascale-and-beyond-apache-flink-in-the-clouds/
Apache Flink performs with low latency but can also scale to great heights. Gelly is Flink’s laboratory for building and tuning scalable graph algorithms and analytics. In this talk we’ll discuss writing algorithms optimized for the Flink architecture, assembling and configuring a cloud compute cluster, and boosting performance through benchmarking and system profiling. The talk covers recent developments in the Gelly library, including scalable graph generators and a mixed collection of modular algorithms written with native Flink operators. We’ll think like a data stream, keep a cool cache, and send the garbage collector on holiday. To this we’ll add a lightweight benchmarking harness to stress and validate core Flink and to identify and refactor hot code with aplomb.
A talk about exploring the Semantic Web, particularly Linked Data, and the Rhizomer approach. Presented August 14th, 2012 at the SRI AIC Seminar Series, Menlo Park, CA.
The traditional process of achieving metadata standards has failed, and I know what I’m talking about from my experience with Dublin Core, BagIt, Z39.50, URLs, and ARKs.
We must think outside the box or we will keep failing. YAMZ (Yet Another Metadata Zoo) is not a standard. Instead, it is a dictionary of terms, some fixed and others still evolving, that are meant to be selectively referenced by future standards. Terms are otherwise decoupled from the standards that reference them. Each term is a kind of nano-specification with a unique persistent identifier that tracks the term as it moves from evolving to mature to deprecated.
YAMZ.net is a tool for taxonomy building. Metadata vocabulary standardization ranks among the most awful design-by-committee experiences, whether at the international standards level or at the working group level. We used a crowdsourced metadata dictionary with reputation-based voting, in which every term gets a unique persistent identifier. The second half consists of exercises that show how it all works in practice.
Two themes:
1. Proposed metadata for “persistence statements”
   - What you mean by persistence
   - Informing user linking choices
2. Metadata hardened in the open yamz.net dictionary
   - Crowdsourced, but with reputation-based voting
   - Every term has a unique persistent identifier (PID)
The scheme of an identifier determines almost nothing about its behavior; what matters is a resolver that is ready to map it to various services. When resolver infrastructure is shared across schemes instead of siloed, all schemes benefit. With suitable prefixing, dozens of well-known, so-called non-actionable schemes can become available from a single unified base URL. The idealized resolver would adopt a fully open infrastructure, and support all schemes and the best features from modern resolvers -- deduplication, content negotiation, link checking, inflections, suffix passthrough, etc.
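A toy sketch of the prefixing idea: one entry point accepts "scheme:value" identifiers and forwards each scheme to a service that understands it. The redirect rules below are illustrative, not any particular resolver's configuration.

RULES = {                                  # hypothetical per-scheme redirect rules
    "doi":  "https://doi.org/{value}",
    "hdl":  "https://hdl.handle.net/{value}",
    "ark":  "https://n2t.net/ark:{value}",
    "pmid": "https://pubmed.ncbi.nlm.nih.gov/{value}/",
}

def resolve(prefixed_id):
    """Map 'scheme:value' (e.g. 'doi:10.1234/98765') to a target URL."""
    scheme, _, value = prefixed_id.partition(":")
    rule = RULES.get(scheme.lower())
    if rule is None:
        raise KeyError("unknown scheme: " + scheme)
    return rule.format(value=value)

print(resolve("doi:10.1234/98765"))        # -> https://doi.org/10.1234/98765
print(resolve("ark:/13030/tqb3kh97gh8w"))  # -> https://n2t.net/ark:/13030/tqb3kh97gh8w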
A huge amount of incredibly diverse research data remains beyond the reach of internet search engines, peer review processes, and systematic cataloging. The ability of consumers to annotate data is an important mitigation, harnessing "the crowd" to make it easier for everyone to discover and re-use data.
UiPath Test Automation using UiPath Test Suite series, part 4 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
PHP Frameworks: I want to break free (IPC Berlin 2024) – Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk encourages a more independent approach to using PHP frameworks, moving towards more flexible and future-proof PHP development.
A tale of scale & speed: How the US Navy is enabling software delivery from l... – sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf – 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Generative AI Deep Dive: Advancing from Proof of Concept to Production – Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... – BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... – SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Elevating Tactical DDD Patterns Through Object Calisthenics – Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
The ARK Identifier Scheme at Ten Years Old
1. The ARK Identifier Scheme at Ten Years Old
7 May 2012
John Kunze
University of California Curation Center
California Digital Library
2. California Digital Library
Serving the University of California:
• 10 campuses
• 360K students, faculty, and staff
• 100's of museums, art galleries, observatories, marine centers, botanical gardens
• 5 medical centers
• 5 law schools
• 3 National Laboratories
CDL supports the research lifecycle:
• Collections
• Digital Special Collections
• Discovery & Delivery
• Publishing Group
• UC Curation Center (UC3)
4. Today's journey
• What are ARKs?
• Separation of concerns
  – Naming ≠ hosting
  – Scheme ≠ resolution
  – Syntax ≠ persistence
• Inflections and metadata
• EZID (easy identifiers) and N2T (name-to-thing)
• Data citation, passthrough
5. What's an ARK identifier?
ARK = Archival Resource Key
ARKs support long-term access to information objects
ARKs identify objects of any type:
• digital objects – data, documents, images, software, ...
• physical objects – books, bones, statues, ...
• groups & living beings – people, animals, orchestras, ...
• intangibles – places, chemicals, diseases, terms, ...
6. The URL is dead, long live the URL!
Fallacy #1: URLs are unreliable, so instead use this... um... well... ah... (shhh!) "URL"
Some of your best friends are URLs:
http://dx.doi.org/10.1234/98765
http://hdl.handle.net/10.1234/98765
http://purl.org/10.1234/98765
http://n2t.net/ark:/101234/98765
7. Persistence is about service
• Imagine the "perfect" golden identifier
• Apply bankruptcy, disk crash, human error, or war, and there's nothing that syntax, scheme, or resolver can do to prevent identifier breakage.
8. What's an ARK identifier? (take 2)
An ARK is a URL, with some extra rules
ARK reserves / and . for what we often assume:
• A/B/C means C is contained in A/B, and B in A
• A.pdf, A.html, and A.docx are all variants of A
Could drastically improve search result display
• No need to look up relationships
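A small Python sketch, not from the slides, of what those two reserved-character rules let software infer from an ARK string alone; the example ARK below is made up.

def ark_relations(ark):
    """Given an ARK like 'ark:/13030/tqb3kh97gh8w/chap2/fig5.small.jpg',
    return its containing objects (the slash hierarchy) and the variant root
    (everything before the first '.' in the final name component)."""
    base = ark.split("ark:/", 1)[1]
    parts = base.split("/")
    naan, names = parts[0], parts[1:]
    # A/B/C means C is contained in A/B, and B in A
    ancestors = ["ark:/" + "/".join([naan] + names[:i]) for i in range(1, len(names))]
    # A.pdf, A.html, and A.docx are all variants of A
    variant_root = "ark:/" + "/".join([naan] + names[:-1] + [names[-1].split(".")[0]])
    return ancestors, variant_root

print(ark_relations("ark:/13030/tqb3kh97gh8w/chap2/fig5.small.jpg"))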
9. ARK inflections (declinations)
An ARK is a special URL with access to 3 things:
1. An information object
2. Its metadata, by appending '?' inflection
3. A provider's promise, by appending a '??'
An inflection changes a name ending for a purpose
• Reduces the number of different names needed
• Use semantic web without hiring a programmer
10. '?' Inflection returns Dublin Kernel
Same machine-readable information as before:
erc:
who: National Research Council
what: The Digital Dilemma
when: 2000
where: http://books.nap.edu/html/digital%5Fdilemma
Even shorter:
erc: National Research Council | The Digital Dilemma | 2000
     | http://books.nap.edu/html/digital%5Fdilemma
See http://dublincore.org/groups/kernel/ for more information!
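A sketch of using the '?' inflection from code: append '?' to the ARK, read back the ERC record, and split its "label: value" lines. The ARK in the commented-out call is hypothetical.

import urllib.request

def fetch_erc(ark_url):
    with urllib.request.urlopen(ark_url + "?") as resp:    # '?' inflection -> metadata
        text = resp.read().decode("utf-8")
    record = {}
    for line in text.splitlines():
        if ":" in line:
            label, _, value = line.partition(":")
            record[label.strip()] = value.strip()
    return record

# erc = fetch_erc("http://n2t.net/ark:/12345/x98765")      # hypothetical ARK
# print(erc.get("who"), erc.get("what"), erc.get("when"))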
11. Why use ARKs?
ARKs are assigned for a variety of reasons:
• affordability – there are no fees to assign or use ARKs
• self-sufficiency – can host ARKs on your own web server
• portability – can move ARKs without change of identity
  http://cdlib.org/ark:/12025/654xz321
  http://rutgers.edu/ark:/12025/654xz321
  http://n2t.net/ark:/12025/654xz321
• global resolvability – can host ARKs at N2T resolver
• density – mixed case means CD, Cd, cD, cd are all distinct
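A sketch of the portability point above: stripping the hosting part of each of those three URLs recovers one and the same ARK.

def ark_identity(url):
    """Return the host-independent part of an ARK URL, e.g.
    'ark:/12025/654xz321' from 'http://cdlib.org/ark:/12025/654xz321'."""
    i = url.find("ark:/")
    if i == -1:
        raise ValueError("no ARK found in " + url)
    return url[i:]

urls = [
    "http://cdlib.org/ark:/12025/654xz321",
    "http://rutgers.edu/ark:/12025/654xz321",
    "http://n2t.net/ark:/12025/654xz321",
]
assert len({ark_identity(u) for u in urls}) == 1     # same object at every host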
12. Some unique advantages of ARKs
• simplicity – uses only ordinary "redirects" & "get" requests
• versatility – with "inflections" (different endings), an ARK should access data, metadata, promises, and more
• transparency – no identifier can guarantee stability, and ARK inflections help users make informed judgments
• visibility – syntax rules make ARKs easy to extract and to compare for containment and variant relationships
• reserved characters: - (hyphen), / (slash), . (period)
13. What's an ARK identifier? (take 3)
ARK is a collection of good ideas
• Separates scheme syntax from resolver rules
  – Resolution is a process of mapping an id to a thing
• Separates name assigning from name mapping
• All schemes encouraged to use these ideas, even ordinary URLs
• N2T resolver can support them for any scheme
14. Identifier schemes are highly parallel
Each example below splits into a Name Mapping Authority (NMA, branded or neutral), a scheme plus Name Assigning Authority Number (NAAN), a base identifier, and an optional suffix:
http://dx.doi.org/doi:10.30/tqb3kh97gh8w
http://hdl.handle.net/hdl:13030/tqb3kh97gh8w
http://purl.org/tqb3kh97gh8w
urn:13030:tqb3kh97gh8w
http://n2t.net/ark:/13030/tqb3kh97gh8w
http://OwlBike.example.org/ark:/13030/tqb3kh97gh8w
15. Locksmith jargon: shoulder, blade, tip, bow, cover
[ASCII drawing of a key slipping into a lock, its parts labeled Cover, Bow, Shoulder, Blade, and Tip]
The key anatomy maps onto identifier anatomy: Cover = NMA, Bow = Scheme + NAAN, followed by the Shoulder, Blade, and Tip of the name:
http://OwlBike.example.org/ark:/13030/tqb3kh97gh8w   (example key)
doi:10.30/tqb3kh97gh8w                               (with parallel parts
hdl:13030/tqb3kh97gh8w                                in other id schemes)
urn:13030:tqb3kh97gh8w
Name Mapping Authority | Base identifier | ...
16. ARK usage in 10 years
• In 2001-2011 ~100 organizations registered for ARKs
• Registry is replicated at BnF and NLM
• Some of the largest users are
  – The California Digital Library
  – The Internet Archive
  – Bibliothèque nationale de France
  – Portico Digital Preservation Service
  – University of California Berkeley
  – University of Chicago
17. Some other ARK registrants
12025  US National Library of Medicine
86077  Cornell Institute for Social and Economic Research
26677  Library and Archives Canada
77635  Humboldt-Universität zu Berlin
13038  World Intellectual Property Organization
78319  Google
61001  University of Chicago
28722  University of California Berkeley
64269  UK Digital Curation Centre
87895  Centre Informatique National de l'Enseignement Supérieur
61903  Family Search
52327  National Library and Archives of Quebec
10261  Jüdisches Museum Berlin
71479  Spanish National Research Council
32833  Massachusetts Institute of Technology
81055  British Library
80713  Biblioteca Nacional de Portugal
18. Immersion vs landing page
What do you mean by "get the data"?
What inflections might distinguish these?
• Immersion – a consumptive experience, or
• Landing page – a menu-study experience?
20. Vision for a "data paper"
• Wrap the unfamiliar in a familiar façade
• A "data paper" is minimally a cover sheet and a set of links to archived artifacts
• Cover sheet contains familiar elements: title, date, authors, abstract, and persistent identifier (DOI, ARK, etc.)
• Just enough to permit basic exposure and discovery
  – Building a basic data citation
  – Indexing by services such as Web of Science, Google Scholar
  – Instilling confidence in the identifier's stability
21. New distributed framework
Flexible, scalable, sustainable network
Coordinating Nodes
• retain complete metadata catalog
• perform basic indexing
• provide network-wide services
• ensure data availability (preservation)
• provide replication services
Member Nodes
• diverse institutions
• serve local community
• subset of all data
• provide resources for managing their data
22. ARKs – coming soon
• Community forum
• Standardization as an Internet RFC
• New inflections for landing page & immersion
23. N2T/EZID – coming soon
• Indexing by A&I vendors
• Suffix pass-through
  – Register Name -> target T
  – Resolve Name/a/b/c -> T/a/b/c automatically
  – Greatly reduce number of ids to manage
• URNs
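A sketch, not N2T's implementation, of the suffix pass-through rule above: a registered name maps to a target T, and any extension of the name resolves by appending the same extension to T. The registered target below is made up.

registry = {
    "ark:/13030/tqb3kh97gh8w": "https://example.org/objects/dilemma",   # made-up target T
}

def resolve_with_passthrough(identifier):
    """Find the longest registered prefix and pass the leftover suffix through."""
    for name in sorted(registry, key=len, reverse=True):
        if identifier == name or identifier.startswith(name + "/"):
            suffix = identifier[len(name):]             # e.g. '/chap2/fig5'
            return registry[name] + suffix
    return None

print(resolve_with_passthrough("ark:/13030/tqb3kh97gh8w/chap2/fig5"))
# -> https://example.org/objects/dilemma/chap2/fig5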