This is the original document that started it all (back in 2008), and it's basically a late-night braindump, so take it with a grain of salt. We didn't spend time fixing or correcting things. It is what it is, as they say.
Primer for a new Internet
On a new Zero-Conﬁguration, Meshed, Ad-Hoc Wireless Network,
Distributed Content and Anonymous Interaction
Let’s #ackin’ rock!
The idea behind ProtoNet is to create a next generation of networks: independent, anonymous, local and basically uncontrollable both in size and content. It is - boldly said - the rebirth of the original internet idea with means that were not available ten years ago.
Just as the World Wide Web was designed to be a tool to connect scientists and allow them to share information, ProtoNet follows the same basic idea: to allow a free flow of information from any user to any other user where a connection between the two is possible (or probable).
As the grip of big companies, governments, and NGOs on the inherently insecure and - by design - monolithic infrastructure of the web gets stronger, and as technology (especially in the realm of wireless communications) evolves and becomes more efficient, cheaper and more widespread, an opportunity arises for so-called information societies to once and for all break free from their dependence on carriers and on governmental and non-governmental information monopolies, and to create their own - independent - information sharing networks using basic, off-the-shelf technology.
The hope is that, with the return of a free flow of information - and the inherent gains in privacy, control and personal responsibility - more drastic changes in the way we do business, learn and share things will be possible. I personally believe that it can - and probably will - be a catalyst for progress in all fields of human interest*.
I hope you'll enjoy reading this document as much as I had fun writing it (not much really but
someone had to). Any feedback, questions, critiques are very welcome.
Proto: A combining form prefix signifying first, primary, primordial; as, primitive in form; protoplast, a primordial organism; prototype, protozoan. (ORIGIN from the Greek protos 'first')
Protoplasm: The viscid and more or less granular material of vegetable and animal cells, possessed of vital properties by which the processes of nutrition, secretion, and growth go forward; the so-called "physical basis of life;" the original cell substance, cytoplasm, cytoblastema, bioplasm, sarcode, etc.
As nature passed through its countless years of evolution it created excellent examples of what
would be needed to design the ProtoNet architecture. From stem cells to basic nervous sys-
tems, the Protonet idea is to - wherever needed - ﬁnd natural designs and use their paradigms
(expanded to the technical world) to solve our problems. While there are several similar ideas,
our approach allows us to create a drastically different implementation. We don't need to spend resources on how to monetize this idea (though there are ways to do that, shown in the relevant section of this document), or on implementing a billing system. We don't need to spend resources on finding a way to control the net, as it is expected to be uncontrollable. We don't need to find a way to implement a darknet (a trusted-peers-only network), as one can be easily implemented on top of the ptn protocol by interested parties.
Most importantly the goal is not to create a new distribution system for the existing internet
(even though our first iteration will certainly be a hybrid system), as ProtoNet is supposed to replace - at least locally - the existing infrastructure, which allows us to approach any upcoming challenges from a completely different perspective.
But this part is about architecture, so here it is (it is a rough draft so bear with me):
Before we continue let us have a quick look at the current wireless internet infrastructure:
The standard wifi setup is basically just a way to wirelessly provide the internet connectivity delivered by your telephone or broadband provider. What it basically does is securely (or sometimes not so securely - see WEP) relay your wireless station's internet requests to the internet provider and relay its answers back to the requesting wireless station.
ProtoNet Phase 1, the basic idea and the introduction of the hybrid system:
The idea behind protonet is simple: use the existing, tightly woven network of individual internet and network users and their respective wifi capabilities, and utilize this infrastructure to create the next-generation internet. Sounds awesome, but what does that really mean?
Well, let's look at big city neighborhoods: every building now has around 2-10 wifi networks, created by residents to use their respective broadband access wirelessly.
Now imagine what would happen if all those single access points started to interconnect.
We'd have a gigantic city wide network of interconnected wiﬁ nodes, practically covering every
corner of the city **.
Now add some storage to every one of those nodes, and suddenly the network effectively becomes a gigantic file area network. What you would be looking at, then, is an infrastructure for data messaging, data retrieval and data storage - basically what the current internet is, without the need for broadband access providers (network traffic would be transferred, at best, at the wireless standard speed: currently 50Mbps, soon to be over 100Mbps, and later in some areas 1Gbps) or centralized hosting services.****
However, what can be described in a few lines in this introduction is quite a challenge in the real world. As the goal is to ensure a successful penetration into key markets, the need arises to create an easy, low-complexity migration scenario, preferably giving the user the feeling that when buying or installing our product he keeps everything he has now and is rewarded with a few new, great features. In fact we don't want to just give him this feeling, we want the actual product to deliver exactly that.
So to ensure a successful product we will start with a simpler, easy to implement solution that
would - when we're ready - allow the system to grow and evolve into what it is supposed to be.
The hybrid can be described in simple terms: it is a system that - without hassle - allows you to use the internet as you do now (surf, share, play etc.), but that also automatically connects you to your neighboring nodes, has storage and understands the protonet protocol. It has just a few additional key technical features over the current router/ap implementation, and the rollout is thus staged as that of a disruptive technology: at first vertically extending, and finally replacing, the current technology.
What this means is that while you're surfing, downloading or chatting you won't feel the slightest difference, but the first step - creating the infrastructure for a whole new generation of applications and communications - will have been taken. All your requests to current internet services will continue to run through your broadband provider, but the moment you surf to your ptn page (http://proton, or ptn://start or ... or actually use a program explicitly communicating over the ptn protocol) you will be communicating directly with your peers. You will see users connected to other nodes nearby; you can contact them, share files, have a videochat with them or just browse the protonet neighborhood (which actually is just a mirror of your real world neighborhood) for new content.
But it doesn't stop here. One of the basic features of this ptn router/ap is the ability to allow and manage trusted and anonymous users out of the box. While most of the current wifi access points are encrypted or closed to anonymous use, protonet encourages sharing not only the ptn network but also your broadband connection, within reasonable limits. And this sharing is rewarded: by sharing your ptn *and* broadband access you get access to other shared broadband accesses for file downloads or uploads (this feature will be discussed later on). Both these features are enabled by default; a Quality of Service configuration makes sure that trusted users of the current node always get higher priority on all connections (while the need for higher priority on protonet communications might be debatable).
[Diagram: trusted clients and anonymous clients connected to a ptn node]
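The priority scheme could be sketched in a few lines of Ruby; the class name, the traffic classes and the priority numbers are all invented for illustration, nothing here is a spec:

```ruby
# Rough sketch of the per-client QoS idea: trusted clients of the local
# node always outrank anonymous ones, whatever the traffic class.
# All names and numbers are illustrative assumptions.
class PtnClient
  attr_reader :mac, :trusted

  def initialize(mac, trusted: false)
    @mac = mac
    @trusted = trusted
  end

  # Lower number = higher priority in this sketch.
  def priority(traffic_class)
    base = { broadband: 10, ptn: 20 }.fetch(traffic_class, 30)
    trusted ? base : base + 100
  end
end

neighbor = PtnClient.new("00:11:22:33:44:55", trusted: true)
stranger = PtnClient.new("66:77:88:99:aa:bb")

neighbor.priority(:broadband) # => 10, served first
stranger.priority(:broadband) # => 110, served when capacity allows
```

Whether ptn traffic itself needs the same trusted/anonymous split is, as the draft says, debatable; in this sketch it gets one anyway.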
Content and applications on the ptn:
As described above, each node has data storage capabilities (starting at a single-disk 250GB configuration (as of today) and scaling to RAID systems with higher storage capabilities and easily replaceable media); it also comes with a couple of applications:
- the protonet dashboard, displaying several information tidbits on the status of the network (currently connected clients, both trusted and anonymous, their transfer rates, and information on nodes and further levels of connected nodes).
- a bonjour web-based chat client, allowing you to chat with all active clients in the neighborhood.
- a configuration utility, allowing you to allocate diskspace to trusted clients, set node information and similar options.
- the possibility for anonymous users to run a limited number of sandboxed web-applications (think of blogs, link lists or similar).
How disk space is managed:
Remember that part of the main idea is to create a large distributed storage facility; thus 80% of the installed storage is by default open to all users. Every user can both upload and download files, and a basic api allows other nodes to list files and their respective metadata. As the openness of the storage space would - without the necessary precautions - be highly susceptible to disturbing interference, a few basic rules would need to be implemented: there will be a single file size limit (to ensure that single users don't take up all the space), and files will have specific life spans (smaller files have longer life spans, larger files have shorter life spans; life spans also depend on the popularity of files - popular files live longer, while unpopular files will be kept until their respective lifespan has ended and the space is needed). Several other network health mechanisms will be deployed to counteract unhealthy usage (the health of the network is not dependent on content, but on usage patterns; flooding the network with unpopular files is one of those patterns).
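The lifespan rules above could be toy-modeled like this; the 2 GB cap, the base lifespan and the log-based formula are all invented for illustration (the draft fixes none of them), the point is only that size and popularity pull in opposite directions:

```ruby
# Toy lifespan model for files in the open storage area.
# Assumptions (not from any spec): a base lifespan, shortened for large
# files, extended for popular ones. Files past their lifespan are only
# evicted once the space is actually needed.
SINGLE_FILE_LIMIT = 2 * 1024**3 # 2 GB cap per file, illustrative

def accept_file?(size_bytes)
  size_bytes <= SINGLE_FILE_LIMIT
end

def lifespan_days(size_bytes, downloads_per_week)
  base = 30.0
  size_penalty = Math.log2(1 + size_bytes.to_f / 1024**2) # grows with size in MB
  popularity_bonus = 5.0 * Math.log2(1 + downloads_per_week)
  [base - size_penalty + popularity_bonus, 1.0].max.round
end

lifespan_days(10 * 1024**2, 0) # small, unpopular file: lives long
lifespan_days(1024**3, 50)     # large but popular file: earns extra days
```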
Even though the default configuration is hassle-free (easy) and allows everyone to drop files there, any user has the possibility to request secured disk space from the owner of the node. While the author assumes that most users will never change the default configuration, the node owner has full control of his disk space at any time: he can change the freely available disk space, he can give secured access (meaning no lifespan and size restrictions) and other privileges to any user he marks as trusted, and he can remove them again at any given time. He is essentially the ruler of his node, if he wants to be.
The idea of control and responsibility:
While every node owner can superimpose his control on his node and its relevant functions, the basic modus operandi assumes that only a small fraction will take the time to control both the stored content and its users. This would in turn allow an anonymous content distribution system ... Also, with the possibility of control comes responsibility. Let's assume that at a given point in time the storage space has been fully used and is not expiring anytime soon. If a user then wanted to put content online, he couldn't - unless he assumes responsibility and decides that his want is important enough to realize it: that would mean either adding storage space to the network by adding his own node, getting secured disk space from a node owner, facilitating a node update through a node owner, or finding another way to achieve his goal. The idea is that the demand and supply of this ecosystem will dynamically balance itself.
One of the basic functions of the protonet infrastructure will be its ability to heal itself, i.e. to detect and counteract unhealthy nodes and behaviors.
Excessive traffic will be punished - or, better said, excessive and unhealthy nodes are cut off from the system until they are healthy again: first for a short period, allowing the owners to ensure a return to healthy usage, and then for longer and longer periods. Unhealthy usage will be counteracted; unpopular content, however, will not - your last resort will always be to add your own node if no one is letting you use their disk space.
As time moves on, heterogeneous implementations of nodes will be highly encouraged, as a heterogeneous network is much more hardened and less susceptible to attacks and unhealthy usage...
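The escalating cut-off could work like a simple exponential back-off. A sketch, with all durations invented for illustration:

```ruby
# Sketch of the escalating quarantine idea: each repeated offense doubles
# the cut-off period, while a stretch of healthy behavior resets the
# counter. The base duration is an illustrative assumption.
class NodeHealth
  BASE_CUTOFF = 60 * 60 # 1 hour for a first offense, in seconds

  def initialize
    @offenses = 0
  end

  # Returns how long (in seconds) the node stays cut off this time.
  def record_unhealthy_usage
    @offenses += 1
    BASE_CUTOFF * 2**(@offenses - 1)
  end

  def record_healthy_period
    @offenses = 0 # clean slate after sustained healthy usage
  end
end

node = NodeHealth.new
node.record_unhealthy_usage # 1 hour
node.record_unhealthy_usage # 2 hours
node.record_unhealthy_usage # 4 hours
node.record_healthy_period  # forgiven
```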
How to ﬁnd content and node mutation:
One of the main challenges is to find a distributed solution to searching and finding content. The protonet search mechanism relies on a couple of node mutations: nodes become indexer nodes, or even higher-level super search nodes. Indexer nodes basically index nearby nodes for content (from ten up to several hundred nodes), and indexed nodes know by whom they have been indexed; in the absence of a super search node, searches are handled by the indexer nodes, which have a simple api interface. If you have special content not hosted by the standard node api, you will have the possibility to subscribe an indexing node to your content (however, only a very simple exchange protocol will be implemented: your host will need to deliver a simple xml file for indexing; reindexing periods can be set by you; for redundancy, the indexer node will also make sure that at least one other indexer node receives your xml for indexing; unavailability will lead to the deletion of your index after a period of time, but you will be able to re-add yourself to the indexer node).
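The "simple xml file" a host delivers for indexing could look like the following; every element and attribute name here is guesswork, the draft only promises that it will be simple xml (and this sketch skips XML escaping entirely):

```ruby
# Sketch: build the minimal index file a content host could hand to an
# indexer node when subscribing. All names are invented; no escaping
# of XML special characters is done in this toy version.
def index_xml(host, entries, reindex_hours: 24)
  lines = ["<ptn-index host=\"#{host}\" reindex-every=\"#{reindex_hours}h\">"]
  entries.each do |e|
    lines << "  <entry name=\"#{e[:name]}\" size=\"#{e[:size]}\" url=\"#{e[:url]}\"/>"
  end
  lines << "</ptn-index>"
  lines.join("\n")
end

puts index_xml("node-0815", [
  { name: "neighborhood-blog.html", size: 18_432, url: "ptn://node-0815/blog" }
])
```

The indexer node would fetch this file on the configured reindexing period and forward a copy to at least one other indexer node for redundancy, as described above.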
Now for more thorough searches: nodes that have the processing power and storage space will become super search nodes. They form a second layer on top of the indexer nodes, able to search through the indexes of several indexer nodes at once; they basically mirror, one level up, the way indexer nodes index plain nodes.
[Sketch of nodes, indexer nodes and super search nodes (awesome drawing skills of the author on display)]
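The two search layers can be sketched as follows; the data structures and the merge strategy are illustrative assumptions, not the protocol:

```ruby
# Sketch of the two search layers: an indexer node holds a local index
# of nearby nodes' files; a super search node queries several indexer
# nodes at once and merges the hits. All structures are illustrative.
class IndexerNode
  def initialize(index)
    @index = index # { "filename" => "node-id", ... }
  end

  def search(term)
    @index.select { |name, _| name.include?(term) }
  end
end

class SuperSearchNode
  def initialize(indexers)
    @indexers = indexers
  end

  # Fan the query out over every mirrored index and merge the results.
  def search(term)
    @indexers.map { |i| i.search(term) }.reduce({}, :merge)
  end
end

a = IndexerNode.new("mesh-howto.txt" => "node-1", "cat.jpg" => "node-2")
b = IndexerNode.new("mesh-talk.ogg" => "node-7")
SuperSearchNode.new([a, b]).search("mesh")
# => {"mesh-howto.txt"=>"node-1", "mesh-talk.ogg"=>"node-7"}
```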
Of course your dashboard always tells you what your node is currently doing and what good it
is doing for the community and the protonet.
On a meta level, the idea is to mostly have a network-of-trust distribution of content, mostly blog-based, as it is now for most illegal and legal downloads: someone publishes an index of the best movies and makes sure those are linked and available on the network. Or someone has a blog you like to subscribe to.
How to ensure a fast and secure distribution of that content and node mutation:
Another node mutation is the routing (routemap) node. Routing nodes will essentially cache and store efficient routes through the network for anything that is farther than two hops (tbd); they can also suggest or become highway nodes - basically a subnetwork of nodes allowing larger traffic to flow efficiently from one part of the network to another.
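A routing node's cache could, in its simplest form, look like this; the structures are illustrative, and the draft marks the two-hop threshold itself as tbd:

```ruby
# Sketch of a routing-node cache: remember the best known route per
# destination, but only bother for destinations farther than two hops
# (threshold tbd per the draft). Structures are illustrative assumptions.
class RouteCache
  MIN_HOPS_TO_CACHE = 3 # anything farther than two hops

  def initialize
    @routes = {}
  end

  # Learn a route (list of node ids); keep it only if it is far enough
  # to be worth caching and shorter than what we already know.
  def learn(dest, hops)
    return if hops.length < MIN_HOPS_TO_CACHE
    best = @routes[dest]
    @routes[dest] = hops if best.nil? || hops.length < best.length
  end

  def route_to(dest)
    @routes[dest]
  end
end

cache = RouteCache.new
cache.learn("node-42", %w[node-3 node-9 node-17 node-42])
cache.learn("node-42", %w[node-3 node-11 node-42]) # shorter, replaces it
cache.learn("node-5",  %w[node-5])                 # too close, not cached
cache.route_to("node-42") # => ["node-3", "node-11", "node-42"]
```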
How to localize and node mutation (implementation and need uncertain):
Another node mutation is the lighthouse node. It needs some outside administering, but allows its owner to set its current longitude/latitude position; this will be used to check the current network topology against the real-life mapping, and to verify and, where needed, modify the current topology and address space.
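The topology check a lighthouse node enables could be sketched as a plausibility test: if two lighthouse nodes claim positions farther apart than the hop count between them could physically span, something in the topology or address space is off. The per-hop range constant is a rough invented assumption:

```ruby
# Sketch of a lighthouse plausibility check: compare the owner-entered
# positions of two lighthouse nodes with the network distance between
# them. The per-hop range guess is an illustrative assumption.
EARTH_RADIUS_M = 6_371_000.0
MAX_RANGE_PER_HOP_M = 300.0 # generous guess for one wifi hop

# Great-circle distance in metres between two lat/long points.
def haversine_m(lat1, lon1, lat2, lon2)
  rad = Math::PI / 180
  dlat = (lat2 - lat1) * rad
  dlon = (lon2 - lon1) * rad
  a = Math.sin(dlat / 2)**2 +
      Math.cos(lat1 * rad) * Math.cos(lat2 * rad) * Math.sin(dlon / 2)**2
  2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a))
end

# Flags a topology that claims fewer hops than the geometry allows.
def plausible?(lat1, lon1, lat2, lon2, hops)
  haversine_m(lat1, lon1, lat2, lon2) <= hops * MAX_RANGE_PER_HOP_M
end

# Two points roughly a kilometre apart in Hamburg:
plausible?(53.554, 9.928, 53.556, 9.944, 1) # => false, one hop can't span that
plausible?(53.554, 9.928, 53.556, 9.944, 5) # => true
```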
As a side note: all nodes contain all the code for all diﬀerent node mutations.
How to do maintenance on the network:
This may be one of the most important parts of the system: its maintenance. There will not be (beyond the implementation and testing phase) a possibility for auto-updates. Firmware or software updates need to be done with direct hardware access; no over-the-net updates will be possible. No external ssh root access will be possible (unless, of course, you run your own node implementation). External access poses an immense problem to network safety and security; as such, it will not be permitted on standard user nodes. We might even implement a completely redundant failsafe firmware restore.
Now the question arises: if no over-the-air updates are allowed, how are they done? Well, pretty simple: through a network of trusted - let's call them super users. You have to find one near you that you trust to update your machine, or you do it yourself. We'll make sure that it's reasonably easy to find one on the protonet. So even the network's update mechanism works on a network-of-trust basis.
The Doing of Stuﬀ:
Now that we have some common ideas about the protonet network and how it is supposed to work, let's have a bird's-eye look at its first implementation.
Working by example, the ﬁrst nodes:
The basic setup will be some mini-itx systems running some unix/linux flavor. The first step will be a two-node setup, with implementation of the basic communications and APIs needed for basic protonet functionality, plus trusted and anonymous usage tests, performance checks on the installed QoS system, and ensuring it keeps most of the router functionality intact.
The next iteration is a three-point setup with the first node mutations implemented (testing indexing, supernodes etc.). When a reliable three-point test system is in place, there will be a 0.1 network rollout: the three access points will be placed in the immediate surroundings of our current home location for testing; this will be the basic testing ground for this operation. We will contact persons and homeowners we believe to be essential in their positions to ensure their cooperation.
When the first three nodes are running, basic tests are needed: file transfer speed, implementing the basic version of the dashboard, getting data on wireless coverage and coverage quality, trying out the first node mutations, and testing automatic over-the-air updates (as said above, this functionality will later be removed; it will only be used during the development phases of the project). After having a running three-point system and having successfully initiated the first node mutations, we will add more nodes to the system, trying to cover a larger area. It will be essential to monitor the user interactions and whether users adopt the dashboard strategy; maybe we will have to implement some way for them to easily remember the dashboard address. A couple of basic applications will then be distributed to the nodes: the file upload/download functionality, and a basic bonjour-like chat system.
At this point we might set up a central user configuration and settings server, whose main purpose will be to centrally store a user's access data, his contacts and maybe his bookmarks. Whether this should be kept like that forever is definitely debatable; personally, I would prefer that everybody carries his own profile with himself and is thus able to access all his services and have his own bookmarks, contacts etc...
We will then continually add nodes and continue the network's development, run our first hack attack scenarios (if that doesn't happen by itself) and continuously monitor the usage of the system. The following steps will depend on our on-site situation assessment: whether development continues in that direction or a rethinking of the system is needed.
Beyond the ﬁrst nodes:
[Diagram: interconnected Protonet clusters - HH-OTTENSEN, HH-ALTONA, HH-HORN and Berlin - linked to one another]
How you can help:
1. Code: though there is a hardware platform prototype running (ubuntu hardy on intel atom hardware), the protocol doesn't work as of now. Languages will be C/C++ or similar for the low-level functionality and Ruby for glue code. Ruby on Rails or Merb (or an alternative small-footprint Ruby framework) along with JS will be used for the front-end needs.
2. Hardware / money for hardware: the idea is to get a mesh of 10-15 nodes running once the basic stuff works (the hardware should be identical to the prototype) and then extend it to 500 nodes by March 2009. Each node costs approximately $200 to build as of today.
3. Find some way to finance this operation: through risk capital, donations, selling of nodes for cash, or ?