I Did Not Write This Document And Can Prove It! - A subjective view on why non-repudiation is non-
existent
J. Oquendo
C|EH, CNDA, CHFI, OSCP
SGFA, SGFE, CSDP*
joquendo@e-fensive.net


Companies, lawyers, lawmakers, law enforcement agents, risk managers, security experts and
countless others have used the term “non-repudiation” for some time. Many would believe it to be
the “Holy Grail”, as proof positive that someone sent something and this fact cannot be disputed.
Rather than re-invent any (word) wheels, a little Wikipedia snippet is in order:

Non-repudiation is the concept of ensuring that a party in a dispute cannot repudiate, or refute the
validity of sending information – be it an electronic statement or signature. Although this concept
can be applied to any transmission, including television and radio, by far the most common in the
computing realm, is the verification and trust of signatures.

According to traditional legal practice, a signature on a paper contract or memorandum may
always be repudiated by the signatory. Such repudiation may take one of two forms: The
signatory may claim fraud or forgery, such as "I did not sign that." Alternately, he/she may accept
the signature as authentic but dispute its validity due to coercion, as in the scenario of blackmail
or confessions given under torture. The legal burden of proof differs depending upon the
repudiation reason. In the former scenario the burden of proof typically rests on the party claiming
validity, while in the latter it shifts to the signatory claiming lack thereof. [1]

With technology being what it is, non-repudiation-based disputes should be well thought through
from an argumentative standpoint. This statement mainly applies to those who depend on the
term “non-repudiation” for legal purposes, as I cannot see one losing sleep over a co-worker,
family member, or someone who is not going to affect one's financial posture taking a matter of
“non-repudiation” to court. However, what does refuting “non-repudiation” mean on the legal
front? How can one dispute the so-called “indisputable”? Facts being what they are, one can
dispute “non-repudiation” rather easily.

From the technology perspective, we know factually that attacks have risen over the years. We
can see that new attack vectors gain ground almost daily. Viruses, worms and all sorts of
malicious programs make their way into the most hardened systems. Everyone is vulnerable,
many are compromised. With this fact in mind, we can look at the most extreme form of
“non-repudiation”-based communications and dispute this concept of “non-repudiation” as well; in
fact, one can outright repudiate most electronic forms of communication at will, whether willfully
or unknowingly. With that said, I will introduce (or re-introduce) you to PGP-based messaging, for
which FFIEC states:

              A major benefit of public key cryptography is that it provides a
              method for employing digital signatures. Digital signatures enable
              the recipient of information to verify the authenticity of the
              information's origin, and also verify that the information is intact.
              Thus, public key digital signatures provide authentication and data
              integrity. A digital signature also provides non-repudiation,
              which means that it prevents the sender from claiming that he or
              she did not actually send the information. These features are
              every bit as fundamental to cryptography as privacy, if not more.
              A digital signature serves the same purpose as a handwritten
              signature. However, a handwritten signature is easy to counterfeit. A
              digital signature is superior to a handwritten signature in that it
              is nearly impossible to counterfeit, plus it attests to the contents
              of the information as well as to the identity of the signer. [2]


The fundamental flaw of that statement is that an attacker need not counterfeit a
signature to repudiate sending a message. Why should they counterfeit a signature when they
can validly sign a message as a sender? Surely that would be irrefutable proof that a sender did
indeed send a message. Even if the key was created during a PGP key-signing party, there is little
to validate the authenticity of the sender. As the FFIEC's “Authentication in an Internet Banking
Environment” puts it: “The agencies consider single-factor authentication, as the only control
mechanism, to be inadequate...” [3]

A visual would look as follows [EFNR]. In this instance, a fictitious character named John Smith
created a PGP key in front of hundreds of users. He created a 4096-bit key, so another fact-based
snippet is in order: As of 2008, the largest (known) number factored by a general-purpose
factoring algorithm was 663 bits long (see RSA-200), using a state-of-the-art distributed
implementation. The next record is probably going to be a 768-bit modulus. RSA keys are
typically 1024–2048 bits long. Some experts believe that 1024-bit keys may become breakable in
the near term (though this is disputed); few see any way that 4096-bit keys could be broken in the
foreseeable future. [4]
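To get a feel for why key size matters, here is a toy sketch of my own (not from any of the cited sources) that factors a tiny textbook RSA modulus by trial division; the same brute approach against a 4096-bit modulus would run for longer than the age of the universe:

```python
def factor(n: int):
    """Trial division: return the smallest prime factor of n and its cofactor."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 1
    return None  # n is prime

# 3233 = 53 x 61 is a classic textbook RSA modulus; it falls instantly.
print(factor(3233))
```

A real 4096-bit modulus has over a thousand decimal digits; even the far more sophisticated general number field sieve has never come close to that size.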

We can assume, theoretically, that John Smith is the sole sender of any messages signed and
sent using the key he created. Because many individuals prefer simplicity over complexity,
certain attacks – if I can even call them that – come to mind whenever this concept of
non-repudiation is mentioned. No matter what the key size is or what security protects the
information, it can all get thrown out the door in practice.

In most businesses, high-powered executives have assistants under them. Think about this for a
moment. What happens when an executive who is almost always mandated to digitally “sign”
business e-mails prior to sending shares his PGP keys? Most executives will likely have their
assistants sign on their behalf, unaware of the dangers involved in giving out their password. In
these businesses, it is very likely that executives have literally given away their fingerprint by
passing the pass phrase to their PGP keys along to their assistants: “Oh, you need my fingerprint
to send that e-mail. Before you send it, you might as well write this down so I won't have to
remind you. The pass phrase is MyFavoriteSportsTeam. Make sure you put it somewhere you will
always find it. Under your mouse pad; no one would ever stop to look there.” If you think it
doesn't happen, apparently you haven't worked in many large environments.
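The shared-passphrase problem can be illustrated with a deliberately simplified model (this is a stand-in I wrote for this point, not real PGP): if the signature is a pure function of the key material unlocked by the passphrase and of the message, then anyone holding the passphrase produces signatures indistinguishable from the owner's.

```python
import hashlib
import hmac

def unlock_key(passphrase: str) -> bytes:
    # Toy stand-in for decrypting a private key with its passphrase.
    return hashlib.sha256(passphrase.encode()).digest()

def sign(message: bytes, passphrase: str) -> str:
    # Toy "signature": a deterministic function of key material and message.
    return hmac.new(unlock_key(passphrase), message, hashlib.sha256).hexdigest()

msg = b"Send the wire transfer today."
executive = sign(msg, "MyFavoriteSportsTeam")
assistant = sign(msg, "MyFavoriteSportsTeam")  # same passphrase, same "fingerprint"
assert executive == assistant
```

The signature proves possession of the key and passphrase, not the identity of the person at the keyboard; that gap is the whole argument.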

Secondly, there is caching. For those unfamiliar: “a cache is a temporary storage area where
frequently accessed data can be stored for rapid access.” Many PGP-like programs have options
available for caching. They do so for ease of use. “Remember my password for n amount of
minutes so I won't have to re-type it.” Much has been written on cache attacks and timing attacks
(TOCTTOU), so there is no need for a barrage of snippet explanations here. These
attack vectors exist whether one realizes it or not. Thirdly, copy and pasting: We've already seen
the insanity of web pages being able to access what is on the clipboard–you know, the place
where things get stored when you copy them. [5] Finally, there are malicious programs, many of
which are known to upload keystroke loggers to a machine. In light of this fact, we observe the
following:
In the pictured instance [EFNS], from the outside perspective, John definitively sent a threatening
e-mail. It seems obvious: hundreds of witnesses were present when he created his PGP key. In
fact, hundreds signed his PGP key, so there is no way he could “not” have sent that message.
Fact: There is no method to counterfeit a key. Fact: An attacker need not duplicate a key to forge
an e-mail and sign it, provided the attacker has access to the password. Facts being what they
are, we can see the possibility of John Smith not having sent the threatening message at all,
contrary to the pulpit-pounding “guilty-as-charged” non-repudiation charges lobbed at him. So the
question would then become: “Why would someone do such a thing?”, which leads to a brief
introduction to the “Joe Job” [6], or in military terms, the false flag [7]:

              “False flag operations are covert operations conducted by
              governments, corporations, or other organizations, which are
              designed to deceive the public in such a way that the operations
              appear as though they are being carried out by other entities”

The Internet can - and has proven at times to be - a wild west. A place where anything goes. We
have read about “cyber-bullying” and “cyber-warfare,” and the list goes on. So “cyber-framing”
should not be any different. In fact, it should be looked into rather than being passed off. Facts are
facts, whether we accept them or not. This concept of refuting non-repudiation is not anything
new. In fact, the concept was re-introduced to me while I sat in Dulles, Virginia, by a great
instructor I had while taking CISM (Certified Information Security Manager) courses. (Thanks,
Larry.) In my best Captain Kirk voice: “Spock... Can it... be... that... you did not... send...
that... message?!”

While I had previously written an article early this millennium called “Framing Private Ryan,”
which meticulously detailed the dangers involved with “cyber-framing” and electronic attack
vectors, the purpose of that initial write-up was to stir up conversation about the validity of
electronic data as a source of “evidence.” For matters known to me and to those who know me,
I am all for accountability, now more than I ever was. I know first-hand how one's life can be
turned upside down by those who do not really understand technology, and the dangers
associated with data being used as evidence. Technology is far more prone to tampering than
many realize, and it can be a trial in itself sorting out the truth. Hence, here it is eight years
later, and again I am reiterating the need for people to revisit this topic.

In the years prior to “e-signatures,” it would have been difficult to dispute the fact that someone
signed a document or came to an agreement on, say, a contract, since there was a physical
element involved. It was “your word against someone else's.” With electronic information,
however, it is not as clear cut. We can assume that John Smith sent a message; in fact, most
evidence will tell us this. Yet if we dig deeper into the electronic forensics scope, we can prove
factually that, while someone could not have possibly counterfeited a PGP key, we can still
repudiate sending a message.

There is little to argue on the point of breaking encryption, however. Many are still taking the
wrong approach from an argumentative standpoint: we know the cryptography cannot be broken,
so it would be absurd to question the legitimacy of non-repudiation from a technological
standpoint when the question is “can you break crypto?” Especially when the person on the
stand holds a PhD and states: “The cryptography goals are: privacy or confidentiality, data
integrity, authentication and non-repudiation.” [8] Who are we to argue? It is one Subject Matter Expert
versus another. But don’t believe everything you see, especially when it comes to electronically
obtained information.
Electronic information is and will continue to be insecure no matter what safeguards we choose
to protect it from disclosure. I have read countless books, requests for comments, white papers
and arguments on the subject, yet everyone seems to point back to technology solving another
technological problem. That is hard to fathom, given that a computer can only output what one
inputs. My personal favorite acronym of the moment is “CBVA,” which stands for Complexity
Based Vulnerability Analysis. Sounds mystifying, wouldn't you agree? But when looking at that
explanation closely, the end result will always bring us back to square one; there is little
guarantee:

               One goal of our effort has been the development of complexity-based
               vulnerability analysis (CBVA), utilizing a complexity-based
               information assurance metric for vulnerability analysis. The metric
               proposed is based upon Kolmogorov Complexity. Computable
               estimates of this Kolmogorov Complexity have been indicated, as
               well as additional useful applications of Kolmogorov Complexity for
               communications in general. Unless vulnerabilities can be
               identified and measured, the information assurance of a system
               can never be properly designed or guaranteed. [9]
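Kolmogorov complexity itself is uncomputable, but the “computable estimates” the quote alludes to are typically compression-based. A rough sketch (my own illustration, not the metric from [9]) using zlib:

```python
import os
import zlib

def complexity_ratio(data: bytes) -> float:
    # Compressed size over original size: a crude, computable proxy for
    # Kolmogorov complexity. Regular data compresses well; random data does not.
    return len(zlib.compress(data, 9)) / max(len(data), 1)

# Highly regular input yields a tiny ratio; random bytes barely compress at all.
assert complexity_ratio(b"A" * 1000) < complexity_ratio(os.urandom(1000))
```

The point survives the simplification: any such estimate measures regularity, not trustworthiness, which is why the quote's caveat about identifying and measuring vulnerabilities matters.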


Scenario: On trial is John Smith for sending PGP signed death threats to a politician.

Prosecutor: Is, say, a 4096-bit PGP key vulnerable?
Expert: (Inserting random fluff mathematics which will bore a jury) ... in short, it is impossible.

Prosecutor: So you’re saying there is no method to counterfeit a PGP key--correct?
Expert: That’s correct.

Prosecutor: So it would be safe to say that any message that was signed with that key would
have been sent by whomever was in possession of that key?
Expert: That is correct.

Prosecutor: Can I repudiate this?
Expert: Experts from all over the world have been trying to break these algorithms and have been
unsuccessful, so it's doubtful.

Prosecutor: So without reasonable doubt, you’re saying someone couldn’t have counterfeited
this “fingerprint” if you will?
Expert: This is correct, I’m absolutely certain.

In this instance of questioning from a prosecutor to an expert witness, the defendant seems to be
irrefutably guilty of sending the message. He obviously did not have his key counterfeited,
therefore he must have sent that threatening message. Imagine, if you can, your life thrown into
mayhem due to a situation like this. What would you do, who would you turn to, how would you
defend yourself? How would you protect yourself? “It's your DNA at the crime scene!” The
answers are not always clear-cut, and sometimes there are no answers. Depending on
technology to solve other technological problems is foolish. Yet professionals continue to fool
themselves into believing in the concept of non-repudiation, which brings us back to the CBVA
statement quoted above: “Unless vulnerabilities can be identified and measured...” How
can you offer a measurement in this scenario? Surely there would likely be more evidence
involved with a case like this; however, for those who work in the security realm, you should be
mindful of these situations. There are many reasons to keep a sharp eye out for all anomalies on
one’s network--no matter how insignificant they can seem. After all, your fingerprint may depend
on it. Anyone working in the forensics field should examine machines carefully for instances of
malware, breaches, viruses and the likes.

Getting back into the courtroom scenario, a counter-argument could work out as follows:

Defense: In order to create and sign this message, a user solely needs to provide a password for
sending a message - is this correct?
Expert: This is correct.

Defense: Can this password be guessed?
Expert: Possibly, although one should use a strong password to avoid this.


Defense: What if the password was compromised--could someone else have signed?
Expert: Absolutely.

Defense: So we can dispute non-repudiation, can’t we?
Expert: I suppose you can.

Defense: Are you aware of any trojans, malware, infections or anything similar in your years of
experience that has say, stolen PGP keys? [10, 11]
Expert: No I’m not aware.

Prosecutor: Objection, we can stipulate there have been ...
Defense: Defense rests.

Theoretically, all that is needed is reasonable doubt. At this point it is created. The introduction of
known threats, viruses, malware and the potential that this “could have” occurred should be
enough even to absolve someone who is guilty. A double-edged sword, wouldn't you say? Yet as
investigators, security professionals and forensic experts, we could have done our due diligence
and checked for this prior to an encounter like this.

Let us move more into the forensics arena here: The argument being either to prove or to
disprove that an infection on John Smith's machine caused a password to be compromised, which
led to the threat message being signed and sent. Reality being what it is, can we prove
statistically, beyond a reasonable doubt, that infectious programming is on the rise? Can we
correlate an infection to a password being compromised? As a forensic expert, have you
performed your due diligence, have you checked for this?

A thorough examination of a machine with an infection (malware, viruses, etc.) has the potential to
either make or break a case, given what I've already presented. You do not necessarily need to
break technology: even technology that cannot be broken can still be circumvented. So, for
those who study and read about non-repudiation and associate this dreaded term as being as
concrete as a fingerprint or DNA - “the silver bullet” - I say to you: “You might be thinking solely
with your keyboards and not using your brains here!”

Surely, from an offensive posture, arguing this case (for non-repudiation) would be difficult, and
very time consuming. Yet, in the event that an argument like this sprouted, the questions I would
immediately focus on would concern the attention paid to the state of the machine with regard to
malware, viruses, etc. Anyone in the forensics field will tell you that capturing registry states is a
difficult task from a forensic standpoint. Even more so with malware, which is becoming much
harder to detect and decipher. What is the next move? Where do we go from
here? Should we really continue using the term “non-repudiation”?


              Four years ago, while pregnant, Ms. Amero went to work one day as
              a substitute teacher and left with felony charges against her.

              Her crime?

              Julie Amero was convicted of four felony counts, each count carrying
              a maximum of ten years, for exposing school children to
              pornography.

              The reality is that Julie, a 40 year–old, pregnant substitute teacher,
              found herself in a storm of popups and didn’t have any idea as to
              what was going on, or how to fix the situation.

              ...

              In March 2008 a $2,400 ad appeared in the Hartford Courant which
              was signed by 28 computer science professors arguing that Ms.
              Amero could not have controlled the pornographic pop-ups. Trial
              Detective Mark Lounsbury never checked for the presence of
              malware.

              A number of computer security experts, led by software developer
              and blogger, Alex Eckelberry noticed serious technical errors were
              made throughout her trial. Mr. Eckelberry brought together a group of
              forensic investigators who volunteered to analyze the computer hard
              drive she was using in the classroom that day and published a report
              on their findings.

              The group's report ultimately caused Julie's conviction to be
              overturned. Judge Hillary Strackbein overturned the unjust verdict in
              2007 and ordered a new trial because of erroneous and false
              information given during the initial trial. [JAM]


There are those who will say: “You're insane! That only happens on television or in works of
fiction” and to those people I would like to make reference to “GETTING EVEN The Truth About
Workplace Revenge—And How to Stop It.” [ISBN 978-0-470-33967-1]. Here is what I am trying to
convey: Information, when used as a means of evidence, should not be relied upon as many in
the legal arena are quick to call a “silver bullet” or homerun. Forensics, from a physical
perspective, is simpler to prove than electronic forensics. While not always foolproof, the idea of
someone attempting to spoof (mimic) someone else's fingerprint along with, say, a hair follicle, is
outright far-fetched. Although in certain scenarios, even that can be pondered. Who is to say that your
insane neighbor who hates you, did not go into your barbershop after your haircut and offer to
sweep the floor (as strange as that may sound) in order to get some of your hairs to leave at the
scene of a crime? Far-fetched, sure, but still achievable and while not that plausible, it should not
be discounted.

Electronic forensics is an altogether different ballgame from the traditional forensic sciences,
though. In traditional criminal forensics there is plenty of physical evidence, which is much more
difficult to “plant.” In recent years, however, there has been a surge in “anti-forensics”
applications and methods that make the “barbershop” hair scenario far simpler to achieve on the
technological scope, and in a much more covert fashion. This is what
makes the reliance on digital forensics such a danger. We are not looking at physical evidence;
rather we are looking at electronic bits (ones and zeros), which can easily be manipulated. Not
only can this evidence be manipulated maliciously, but also inadvertently by investigators or
anyone else involved with the electronics.

Let us look at the six phases of the digital forensics process [12]: Identification, Preservation,
Collection, Examination, Analysis and Presentation. We can closely examine the danger in
relying on all or any of these from an investigative standpoint. Here are some of the concerns I
would have while doing an incident response, with an explanation associated for each concern:

1) Identification – In order for an investigation to take place, an attack would have to have been
   identified. It is extremely doubtful that someone stumbled upon an active intrusion or incident.
   We can rightfully say that some mechanism of logging or an event trigger took place and
   alerted us to an anomaly, which is now called event_X. With all of the different programs and
   events on a network, the odds of someone watching log information in real-time and seeing an
   active attack, would be the equivalent of a needle-in-a-haystack search. Only in the case of
   electronics, this haystack would stretch for miles.

Log validity is not always a clear-cut approach to validating the identity of an individual. All we can
verify is, “this came from machine_A's IP address”. While we can correlate an IP address to a
MAC, the MAC to an IP, then to a machine, followed by a port on the switch, we cannot rightfully
state that John Smith sat at his computer and caused event_X. We do know factually that we
have a log entry of event_X that was triggered by the IP address that matches the machine that
John Smith uses. Our first approach to skepticism comes from the fact that IP addresses are
easily spoofed. There is no definitive mechanism to halt this. We can minimize address spoofing
with, e.g., BCP 38 ingress filtering, VLANs, port security and a variety of other corrective
measures, but we cannot rid a network of it definitively. At least, I've not found any product or
person who will come flat out and make this statement knowing it can be disputed.
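The weakness of log evidence is easy to demonstrate. Classic BSD syslog (RFC 3164) over UDP carries a hostname field that is simply whatever the sender writes, and nothing authenticates it. A sketch (the hostname and collector address are made up):

```python
import socket

def forge_syslog(hostname: str, text: str, pri: int = 34) -> bytes:
    # RFC 3164 framing is "<PRI>TIMESTAMP HOSTNAME MSG"; nothing in classic
    # UDP syslog verifies that HOSTNAME (or the datagram's source IP) is genuine.
    return f"<{pri}>Oct 11 22:14:15 {hostname} {text}".encode()

msg = forge_syslog("john-smith-pc", "sshd[999]: Accepted password for root")
# Delivering it is a single datagram (collector address is hypothetical):
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, ("loghost.example", 514))
```

Any machine that can reach the log collector can write “John Smith's machine did X” into the record, which is exactly why a log entry alone does not place a person at a keyboard.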

With identification, going a step further, let us suppose that we have a Network Access Control
with strategically placed cameras pointing at John Smith's desk, and with timestamps that will
assist us so that we can point out factually that John Smith sat at his desk at the exact time that
entry_X occurred. Despite that, we still cannot prove positively that John Smith is the culprit.
Because of the malware, virus, trojan and altogether “badware” factor, there is the high possibility
that something outside of John Smith's actions was responsible for triggering event_X on his
machine.

So we place anti-virus, anti-malware and other types of software on his machine to further
minimize these instances. We know factually that John Smith's machine is at IP address
10.10.10.2 because our switches tell us so. We know factually that his machine's MAC address
is 00:00:de:ad:ca:fe because his physical hardware tells us so. The cameras tell us he was at his
desk all day; his anti-virus and all other installed software have green lights (meaning that they've
been updated). So what?

Here are the counterpoints:
    1a) MAC addresses can easily be changed [13], yet we can go on and state that “he didn't
        have administrative privileges.” We can also dispute this.
    1b) Anti-*anything* software relies on updates based on “known threats.” In the case of a
        customized or newly created trojan, unless that signature exists, it will not be detected.
    1c) As a forensics investigator, we would need to know how the router and firewall handle
        traffic—e.g., their (anti-spoofing) filtering. Without doing so, anyone can dispute the
        validity of an IP address.
    1d) Because sysloggers rely on IP, we need to go back to 1c: IP is unreliable.

2) Preservation – Have we taken the necessary steps to ensure that the chain of custody was
   followed? “James Acme discovered an attack - entry_X. Upon further investigation...”
   Investigation by whom—James Acme? Does he understand forensics? Does he have the
   authority and/or knowledge to complete incident response? What was his visibility when doing
   forensics (meaning: Who else witnessed the initial response? How was the evidence handled
   and preserved?)

I would like to point out the following comment in an article which I read and commented on
pertaining to mobile forensics [14]: “I recommend making a working copy and an archive copy.
Now reseal and store your exhibit.” My inference after reading that author's commentary was: A
disk storage device of sorts was obtained, forensics were started, at a later time, a
recommendation was made. If you take the time to read the entire article, one can instantly
interpret the investigator as starting his incident response prior to making a copy of the evidence.
“By the way... you should make a working copy and an archive copy...” One of the first steps,
prior to even plugging in any software or hardware, be it EnCase, FTK, or Oxygen, is to make a
bit-by-bit copy. (This comes after the picture taking, network mapping and all of the earlier
steps.) Otherwise, you've just tampered with the data and possibly cost someone a trip to the
slammer when they may be innocent!
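Whatever imaging tool is used, the preserved copy should be fingerprinted immediately, so that later analysis can be shown not to have altered it. A minimal sketch of that step (the chunked read is only so multi-gigabyte images fit in memory):

```python
import hashlib

def image_digest(path: str, algo: str = "sha256") -> str:
    # Hash an evidence image in fixed-size chunks; record this value in the
    # chain-of-custody notes before any analysis begins.
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Re-running image_digest() later must reproduce the recorded value exactly;
# any difference means the "evidence" has changed since collection.
```

Tools like EnCase and FTK do this internally; the point is that the digest must be taken and recorded before any examination, not recommended as an afterthought.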

3) Collection – I associate “Preservation” with this step. I would also emphasize and place
   “Preservation” again immediately after “Collection.” After the collection of the data, we need to
   make sure we perform our due diligence and preserve what we have collected. There should
   never be a moment when you say: “Oh, did I make a copy?” or “Gee, now where'd that copy
   go?...” The statement: “I recommend making a working copy” is horrific. A better
   understanding is - making a copy is not a recommendation, but rather an outright mandate!
   As a forensics investigator, if I were on the stand answering questions against this expert
   witness (linked in the article), his comments, approach and methods would bring into question
   the validity of his evidence, not to mention its verification. He did not seem to
   follow standards and procedures. Why should someone's life be placed in the hands of an
   expert like this?

I would question the methods of collection of the evidence; i.e., how was it accomplished? Was
the machine “unplugged” or powered down? Was a live forensics CD used? Did the incident
response handler have a proper write blocker in place during the bit-by-bit copy? Who was the
responder? Was he or she qualified, and if so, by whom? There are plenty of questions to ask
here. Collection methods can be meticulously debunked, especially if they are not properly
documented from square one.

4) Examination – Because so many anti-forensics based tools have made headway in such a
   fashion that 'anyone without forensics experience can use them,' how trustworthy is the data
   that we are investigating and analyzing anyway? Did the forensic examiner take all known
   snapshots and entry points to the machine being analyzed? By this I mean physical images
   and diagrams of the connectivity to this machine and to its location. E.g., in a cubicle
environment, who else may have access to this machine? Was there a clear desk policy?
   Were there passwords lingering under the keyboard? Were the firewall rules, router
   configurations, NAC and anti-virus configurations viewed in order to diagram how information
   flowed into this machine? Were state sessions taken from the firewall or router to show who
   or what was connected to the machine at the time of incident response? This is the best
   mechanism to validate the trustworthiness of the data we are obtaining: we're validating that
   there was no outside tampering with any potential evidence on this machine and verifying it
   prior to moving anything. We have to remember that one small compromise on another
   machine in that network - and the “forensic” wall comes crumbling down. There is plenty to
   ask here.

   5) Analysis – What should we look for outside of the obvious (evidence of what caused
      event_X)? I would dissect our forensics copy and look for any hints of malicious software
      on the machine (viruses, worms, trojans, rootkits). If an instance comes up, we would
      need to know if we can associate that as the cause of event_X on John Smith's machine or
      rule it out. Did we perform due diligence in ensuring we unpacked any discovered
      “packed” malware? Did we search for extreme anomalies: steganography, crypto,
      rootkits? Any one of these could exist. In searching for these instances, we: a) remove
      the potential of someone disputing our findings and punching holes in our evidence; and b)
      can show factually that information was not subject to tampering from an outside source.

   6) Presentation – I, as an expert, should be sure beyond a shadow of a doubt that event_X
      was caused by this machine and not John Smith. As a forensics examiner, my role is not
      to prove who did what, but rather to describe in detail what occurred, how it occurred, and
      how I discovered its occurrence. It would be too simple to solely connect the dots—MAC
      to IP to Desk plus Camera Image equals John Smith. The reality however is, it is not that
      simple. Especially not with technology, and even more so when a life is on the line.
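The packed or encrypted malware hunt in the Analysis step is often bootstrapped with a byte-entropy scan, since compressed or encrypted sections approach eight bits of entropy per byte. A simple sketch of that heuristic (the 7.2 threshold is an illustrative assumption, not a standard):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    # Bits of entropy per byte: plain text sits around 4-5, while packed or
    # encrypted content approaches the 8.0 maximum.
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

# A scanner would flag any file section whose entropy is suspiciously high,
# e.g. shannon_entropy(section) > 7.2, for closer (unpacking) examination.
```

High entropy alone proves nothing, which mirrors the article's point: it only tells the examiner where to look harder, not who sat at the keyboard.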

So is it safe to say that electronic evidence needs some revamping? Sure, but the questions are
how can we do it and what can we do about certain issues. How and what can we do to prevent
problematic questions from catching us off-guard? We have known all along as security
professionals that we can place as many detective, corrective, preventive and other controls in
place and yet we could still fail miserably. Electronic Forensics, in its short form, has both many
pros and many cons. But when someone has their life on the line — possibly due to “digital
framing” — we have to work harder to make sure we do not make mistakes. Especially little ones,
like: “Oh gosh... maybe I should back this up instead of running EnCase on a live disk. You know
what, I recommend backing it up...” [14]




[1] http://en.wikipedia.org/wiki/Non-repudiation
[2] http://www.pgpi.org/doc/pgpintro/
[3] http://www.ffiec.gov/pdf/authentication_guidance.pdf
[4] http://en.wikipedia.org/wiki/RSA
[5] http://msdn.microsoft.com/en-us/library/bb250473(VS.85).aspx
[6] http://en.wikipedia.org/wiki/Joe_job
[7] http://en.wikipedia.org/wiki/False_flag
[8] Handbook of Research on Information Security and Assurance, Chapter XI
(ISBN 978-1-59904-855-0)
[9] http://www.stat.ucla.edu/~cocteau/dimacs/bush.pdf
[10] http://isc.sans.org/diary.html?storyid=4207
[11] http://lists.jammed.com/ISN/1999/02/0007.html
[12] http://en.wikipedia.org/wiki/Reasonable_doubt
[13] http://www.technitium.com/tmac/index.html
[14] http://mobileforensics.wordpress.com/bio/
[JAM] http://www.huffingtonpost.com/kim-mance/teachers-pop-up-porn-nigh_b_145772.html
[EFNR] http://www.e-fensive.net/non-repudiation.jpg
[EFNS] http://www.e-fensive.net/repudiation.jpg


Special thanks to: David Litchfield, for sending me more information to read; Joel Gridley, for
reminding me that I am officially a dinosaur, as well as sending me banking regulatory information;
Larry Greenblatt, for being one of the most down-to-earth security professionals I have ever met;
Chris Nickerson and others, for tuning in and promoting the Exotic Liability podcast (NOTE: Keep
thinking outside those boxes!); Dr. Anton Chuvakin, for his input which led to additional
clarification on my stance concerning this topic (repudiation versus non-repudiation); Susan
Cascio, for spending so much time reading my documents and for forcing me to relearn horrible
'engrish' [sic] and grammar; Ron Herrmann, for assisting me in re-writing portions in an
understandable fashion; and especially my wife, for tolerating me.

I Did Not Write This Document And Can Prove It!

  • 1. I Did Not Write This Document And Can Prove It! - A subjective view on why non-repudation is non- existent J. Oquendo C|EH, CNDA, CHFI, OSCP SGFA, SGFE, CSDP* joquendo@e-fensive.net Companies, lawyers, lawmakers, law enforcement agents, risk managers, security experts and countless others have used the term “non-repudiation” for some time. Many would believe it to be the “Holy Grail”, as proof positive that someone sent something and this fact cannot be disputed. Rather than re-invent any (word) wheels, a little wikipedia snippet is in order: Non-repudiation is the concept of ensuring that a party in a dispute cannot repudiate, or refute the validity of sending information – be it an electronic statement or signature. Although this concept can be applied to any transmission, including television and radio, by far the most common in the computing realm, is the verification and trust of signatures. According to traditional legal practice, a signature on a paper contract or memorandum may always be repudiated by the signatory. Such repudiation may take one of two forms: The signatory may claim fraud or forgery, such as "I did not sign that." Alternately, he/she may accept the signature as authentic but dispute its validity due to coercion, as in the scenario of blackmail or confessions given under torture. The legal burden of proof differs depending upon the repudiation reason. In the former scenario the burden of proof typically rests on the party claiming validity, while in the latter it shifts to the signatory claiming lack thereof. [1] With technology being what it is, non-repudiation based disputes should be well thought through from an argumentative standpoint. This statement mainly applies to those who depend on the term “non-repudiation” for legal purposes, as I cannot see one losing sleep over their co-worker, family member or someone who is not going to disaffect one's financial posture, taking a matter of “non-repudiation” to court. 
However, what does refuting “non-repudiation” mean on the legal front? How can one dispute the so called “indisputable”. Facts being what they are, one can dispute “non-repudiation rather easily. From the technology perspective, we know factually that attacks have risen over the years. We can see that new attack vectors gain ground almost daily. Viruses, worms and all sorts of malicious programs make their way into the most hardened systems. Everyone is vulnerable, many are compromised. With this fact in mind, we can look at the most extreme form of “non- repudiation” based communications and dispute this concept of “non-repudiation as well. In fact outright repudiate most electronic forms of communication at will, with denial being either/or: willfully or unknowingly. With that said, I will introduce (or re-introduce) you to PGP based messaging, for which FFIEC states: A major benefit of public key cryptography is that it provides a method for employing digital signatures. Digital signatures enable the recipient of information to verify the authenticity of the information's origin, and also verify that the information is intact. Thus, public key digital signatures provide authentication and data integrity. A digital signature also provides non-repudiation, which means that it prevents the sender from claiming that he or she did not actually send the information. These features are every bit as fundamental to cryptography as privacy, if not more.
  • 2. A digital signature serves the same purpose as a handwritten signature. However, a handwritten signature is easy to counterfeit. A digital signature is superior to a handwritten signature in that it is nearly impossible to counterfeit, plus it attests to the contents of the information as well as to the identity of the signer. The fundamental flaw of that statement by FFIEC is that - an attacker need not counterfeit a signature to repudiate sending a message. Why should they counterfeit a signature when they can validly sign a message as a sender? Surely that would be irrefutable proof that a sender did indeed send a message. Even if the key was created during a PGP key-signing party there is little to validate the authenticity of the sender. "The agencies consider single-factor authentication, as the only control mechanism, to be inadequate..." From the FFIEC "Authentication In an Internet Banking Environment" [3] A visual would look as follows [EFNR]. In this instance, a fictitious character named John Smith created a PGP key in front of hundreds of users. He created a 4096 bit key so another fact based snippet is in order: As of 2008, the largest (known) number factored by a general-purpose factoring algorithm was 663 bits long (see RSA-200), using a state-of-the-art distributed implementation. The next record is probably going to be a 768 bits modulus. RSA keys are typically 1024–2048 bits long. Some experts believe that 1024-bit keys may become breakable in the near term (though this is disputed); few see any way that 4096-bit keys could be broken in the foreseeable future. [4] We can allude to John Smith being the sole sender of any messages signed and sent by him using the key he created – theoretically. Because many individuals prefer simplicity over complexity, certain attacks – if I can even call them that – come to mind where this concept of non-repudiation is mentioned. 
No matter what the key size is or security protecting information is, it all gets thrown out the door when it comes to technology. In most businesses, high powered executives have assistants under them. Think about this for a moment. What happens when an executive who almost always is mandated to digitally “sign” business based emails prior to sending shares his PGP keys? Most executives will likely have their assistants sign on their behalf, and they will not know of the dangers involved with giving out their password. In these businesses, it is very likely that executives have literally given away their fingerprint in the form of passing to their assistants the pass phrase to their PGP keys: “Oh, you need my fingerprint to send that email. Before you send it, might as well write this down so I won't have to remind you. The pass phrase is MyFavoriteSportsTeam. Make sure you put it somewhere you will always find it. Under your mouse pad, no one would ever stop to look there.” If you think it doesn't happen, apparently you haven't work in many large environments. Secondly, there is caching. For those unfamiliar: “a cache is a temporary storage area where frequently accessed data can be stored for rapid access.” Many PGP-like programs have options available for caching. They do so for ease of use. “Remember my password for n amount of minutes so I won't have to re-type it”. Much has been written on cache attacking, timing attacks (TOCTTOU), so there is no need for a barrage of snippet explanations on these attacks. These attack vectors exist whether one realizes it or not. Thirdly, copy and pasting: We've already seen the insanity of web pages being able to access what is on the clipboard–you know, the place where things get stored when you copy them. [5] Finally, there are malicious programs, many of which are known to upload keystroke loggers to a machine. In light of this fact, we observe the following:
  • 3. In the pictured instance [EFNS], from the outside perspective, John definitively sent a threatening e-mail. It's obvious as hundreds of witnesses were present when he created a PGP key. In fact, hundreds signed his PGP key so there is no way he could “not” have sent that message. Fact: There is no method to counterfeit a key. Fact: An attacker need not duplicate a key to forge an e- mail and sign it provided the attacker has access to the password. Facts being what they are, we can see the possibility of John Smith not even sending a threatening message - contrary to the pulpit pounding “guilty-as-charged” non-repudiation charges lobbed at him. So the question would then become: “Why would someone do such a thing” which leads to a brief introduction on a “Joe Job” [6], or in military terms, the false flag [7]: “False flag operations are covert operations conducted by governments, corporations, or other organizations, which are designed to deceive the public in such a way that the operations appear as though they are being carried out by other entities” The Internet can - and has proven at times to be - a wild west. A place where anything goes. We have read about “cyber-bullying” and “cyber-warfare,” and the list goes on. So “cyber-framing” should not be any different. In fact, it should be looked into rather than being passed off. Facts are facts, whether we accept them or not. This concept of refuting non-repudiation is not anything new. In fact, the concept was re-introduced to me while I sat in Dulles Virginia by a great instructor I had while taking CISM (Certified Information Security Manager) courses. (Thanks Larry) Captain Kirk voice: “Spock... Can it... be... that... you did not.... send... 
that... message?!" While I had previously written an article in the early part of this millennium called "Framing Private Ryan," which meticulously detailed the dangers of "cyber-framing" and electronic attack vectors, the purpose of that initial write-up was to stir up conversation about the validity of electronic data as a source of "evidence." For reasons known to me and to those who know me, I am all for accountability - more so now than I ever was. I know first-hand how one's life can be turned upside down by those who do not really understand technology, and the dangers of data being used as evidence. Technology is far more prone to tampering than many realize, and sorting out the truth can be a trial in itself. Hence, here I am eight years later, re-iterating the need to revisit this topic. In the years before "e-signatures," it would have been difficult to dispute that someone signed a document or came to an agreement on, say, a contract, since there was a physical element involved; it was "your word against someone else's." With electronic information, however, it is not as clear-cut. We can assume that John Smith sent a message; in fact, most evidence will tell us this. Yet if we dig deeper into the electronic forensics scope, we find that while someone could not possibly have counterfeited a PGP key, the sending of a message can still be repudiated.
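That closing point can be reduced to a few lines of code: a digital signature proves only that someone holding the key material produced it, not who was at the keyboard. Below is a minimal sketch that uses an HMAC as a stand-in for a PGP signature; the passphrase and message are invented for illustration:

```python
import hashlib
import hmac

# Hypothetical passphrase the executive shared with an assistant.
PASSPHRASE = b"MyFavoriteSportsTeam"

def sign(message: bytes, secret: bytes) -> str:
    """Stand-in for a PGP signature: anyone holding `secret` can produce it."""
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

msg = b"Approve the wire transfer."

# The executive signs...
sig_executive = sign(msg, PASSPHRASE)
# ...and the assistant (or an attacker with the mouse-pad note) signs.
sig_assistant = sign(msg, PASSPHRASE)

# The two signatures are byte-for-byte identical: the verifier can confirm
# THAT the key was used, never WHO used it.
assert sig_executive == sig_assistant
```

A real PGP signature uses asymmetric keys rather than an HMAC, but the repudiation argument is identical: the verification step authenticates key material, not a person.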
There is little to argue on the point of breaking encryption; many, however, still take the wrong approach from an argumentative standpoint. We know the cryptography cannot be broken, so it is absurd to question the legitimacy of non-repudiation from a technological standpoint when the question posed is "can you break crypto?" - especially when the person on the stand holds a PhD and states: "The cryptography goals are: privacy or confidentiality, data integrity, authentication and non-repudiation." [8] Who are we to argue? It is one Subject Matter Expert versus another. But do not believe everything you see, especially when it comes to electronically obtained information.
Electronic information is and will continue to be insecure no matter what safeguards we choose to use to protect it from disclosure. I have read countless books, requests for comments, white papers and arguments on the subject, yet everyone seems to point back to technology solving another technological problem. It is difficult to fathom, given that a computer can only output what one inputs. My personal favorite acronym of the moment is "CBVA," Complexity Based Vulnerability Analysis. Sounds mystifying, wouldn't you agree? But when looking at its explanation closely, the end result brings us back to square one - there is little guarantee:

One goal of our effort has been the development of complexity-based vulnerability analysis (CBVA), utilizing a complexity-based information assurance metric for vulnerability analysis. The metric proposed is based upon Kolmogorov Complexity. Computable estimates of this Kolmogorov Complexity have been indicated, as well as additional useful applications of Kolmogorov Complexity for communications in general. Unless vulnerabilities can be identified and measured, the information assurance of a system can never be properly designed or guaranteed. [9]

Scenario: John Smith is on trial for sending PGP-signed death threats to a politician.

Prosecutor: Is, say, a 4096-bit PGP key vulnerable?
Expert: (Inserting random fluff mathematics which will bore a jury) ...in short, it is impossible.
Prosecutor: So you're saying there is no method to counterfeit a PGP key - correct?
Expert: That's correct.
Prosecutor: So it would be safe to say that any message signed with that key would have been sent by whomever was in possession of that key?
Expert: That is correct.
Prosecutor: Can I repudiate this?
Expert: Experts from all over the world have been trying to break these algorithms and have been unsuccessful, so it's doubtful.
Prosecutor: So without reasonable doubt, you're saying someone couldn't have counterfeited this "fingerprint," if you will?
Expert: This is correct; I'm absolutely certain.

In this line of questioning from a prosecutor to an expert witness, the defendant seems irrefutably guilty of sending the message. He obviously did not have his key counterfeited; therefore he must have sent that threatening message. Imagine, if you can, your life thrown into mayhem by a situation like this. What would you do, who would you turn to, how would you defend yourself? How would you protect yourself? "It's your DNA at the crime scene!" The answers are not always clear-cut, and sometimes there are no answers. Depending on technology to solve other technological problems is foolish, yet professionals continue to fool themselves into believing the concept of non-repudiation, which brings us back to the statement quoted above: "Unless vulnerabilities can be identified and measured..." How can you offer a measurement in this scenario? Surely there would be more evidence involved in a case like this; however, those who work in the security realm should be mindful of these situations. There are many reasons to keep a sharp eye out for all anomalies on
one's network - no matter how insignificant they may seem. After all, your fingerprint may depend on it. Anyone working in the forensics field should examine machines carefully for instances of malware, breaches, viruses and the like. Getting back to the courtroom scenario, a counter-argument could play out as follows:

Defense: In order to create and sign this message, a user solely needs to provide a password - is this correct?
Expert: This is correct.
Defense: Can this password be guessed?
Expert: Possibly, although one should use a strong password to avoid this.
Defense: What if the password was compromised - could someone else have signed?
Expert: Absolutely.
Defense: So we can dispute non-repudiation, can't we?
Expert: I suppose you can.
Defense: Are you aware of any trojans, malware, infections or anything similar in your years of experience that has, say, stolen PGP keys? [10, 11]
Expert: No, I'm not aware.
Prosecutor: Objection, we can stipulate there have been...
Defense: The defense rests.

Theoretically, all that is needed is reasonable doubt, and at this point it has been created. The introduction of known threats, viruses, malware and the potential that this "could have" occurred should be enough even to absolve someone who is guilty. A double-edged sword, wouldn't you say? Yet as investigators, security professionals and forensic experts, we could have exercised the due diligence to check for this before an encounter like this one. Let us move further into the forensics arena: the argument is either to prove or to disprove that an infection on John Smith's machine caused a password to be compromised, which led to the threatening message being signed and sent. Reality being what it is, can we prove statistically, beyond a reasonable doubt, that infectious programming is on the rise? Can we correlate an infection to a password being compromised? As a forensic expert, have you performed your due diligence? Have you checked for this?
A thorough examination of an infected machine (malware, viruses, etc.) has the potential to either make or break a case, given what I have already presented. You do not necessarily need to break the technology - especially technology that cannot be broken - when you can still circumvent it. So, for those who study and read about non-repudiation and treat this dreaded term as being as concrete as a fingerprint or DNA - "the silver bullet" - I say to you: "You might be thinking solely with your keyboards and not using your brains here!" Surely, from an offensive posture, arguing a case like this (for non-repudiation) would be difficult and very time consuming. Yet, in the event that an argument like this sprouted, the questions I would immediately focus on concern the state of the machine with regard to malware, viruses, etc. Anyone in the forensics field will tell you that capturing registry states is a difficult task from a forensic standpoint. Even more so with malware, which is becoming
much more difficult to detect and decipher. What is the next move? Where do we go from here? Should we really continue using the term "non-repudiation"? Four years ago, while pregnant, Ms. Amero went to work one day as a substitute teacher and left with felony charges against her. Her crime?

Julie Amero was convicted of four felony counts, each count carrying a maximum of ten years, for exposing school children to pornography. The reality is that Julie, a 40-year-old, pregnant substitute teacher, found herself in a storm of popups and didn't have any idea as to what was going on, or how to fix the situation. ... In March 2008 a $2,400 ad appeared in the Hartford Courant which was signed by 28 computer science professors arguing that Ms. Amero could not have controlled the pornographic pop-ups. Trial Detective Mark Lounsbury never checked for the presence of malware. A number of computer security experts, led by software developer and blogger Alex Eckelberry, noticed serious technical errors were made throughout her trial. Mr. Eckelberry brought together a group of forensic investigators who volunteered to analyze the computer hard drive she was using in the classroom that day and published a report on their findings. The group's report ultimately caused Julie's conviction to be overturned. Judge Hillary Strackbein overturned the unjust verdict in 2007 and ordered a new trial because of erroneous and false information given during the initial trial. [JAM]

There are those who will say, "You're insane! That only happens on television or in works of fiction," and to those people I would like to make reference to "Getting Even: The Truth About Workplace Revenge - And How to Stop It" [ISBN 978-0-470-33967-1]. Here is what I am trying to convey: information, when used as a means of evidence, should not be relied upon as what many in the legal arena are quick to call a "silver bullet" or home run.
Forensics from a physical perspective is simpler to prove than electronic forensics. While not always foolproof, the idea of someone spoofing (mimicking) someone else's fingerprint along with, say, a hair follicle is outright far-fetched - though in certain scenarios even that can be pondered. Who is to say that your insane neighbor who hates you did not go into your barbershop after your haircut and offer to sweep the floor (as strange as that may sound) in order to collect some of your hairs to leave at the scene of a crime? Far-fetched, sure, but still achievable, and while not that plausible, it should not be discounted. Electronic forensics, though, is an altogether different ballgame from the traditional forensic sciences.
In normal criminal forensics there is plenty of physical evidence, which is much more difficult to "plant." In recent years, however, there has been a surge in "anti-forensics" applications and methods, which make the "barbershop" hair scenario a lot simpler to achieve on the technological scope - and in a much more covert fashion. This is what makes reliance on digital forensics such a danger. We are not looking at physical evidence; rather, we are looking at electronic bits (ones and zeros), which can easily be manipulated. Not only can this evidence be manipulated maliciously, but also inadvertently, by investigators or anyone else involved with the electronics. Let us look at the six phases of the digital forensics process [12]: Identification, Preservation, Collection, Examination, Analysis and Presentation. We can closely examine the danger of relying on all or any of these from an investigative standpoint. Here are some of the concerns I would have while doing incident response, with an explanation for each:

1) Identification - In order for an investigation to take place, an attack has to have been identified. It is extremely doubtful that someone stumbled upon an active intrusion or incident; we can rightfully say that some mechanism of logging or an event trigger took place and alerted us to an anomaly, which is now called event_X. With all of the different programs and events on a network, the odds of someone watching log information in real time and seeing an active attack would be the equivalent of a needle-in-a-haystack search - only in the case of electronics, this haystack would stretch for miles. Log validity is not always a clear-cut approach to validating the identity of an individual. All we can verify is: "this came from machine_A's IP address."
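And even that verification rests on text. A syslog entry is just a formatted string; nothing in the entry itself binds it to the host that ostensibly produced it. A small sketch makes the point (the hostname, address, event and timestamp are all invented):

```python
from datetime import datetime

def syslog_line(host: str, ip: str, event: str) -> str:
    """Format an entry roughly the way a syslog collector records it."""
    # Fixed timestamp so the two entries below compare cleanly.
    ts = datetime(2010, 3, 14, 9, 26, 53).strftime("%b %d %H:%M:%S")
    return f"{ts} {host} sshd: event_X from {ip} ({event})"

# The entry the collector wrote when machine_A really triggered event_X...
genuine = syslog_line("machine_A", "10.10.10.2", "login failure")
# ...and the entry an attacker fabricates after spoofing machine_A's address.
forged = syslog_line("machine_A", "10.10.10.2", "login failure")

# Byte-for-byte identical: the log proves an entry exists,
# not who or what caused it.
assert genuine == forged
```

Remote syslog over plain UDP makes this worse still, since anyone who can spoof the source address can inject entries directly into the collector.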
While we can correlate an IP address to a MAC, the MAC to an IP, then to a machine, followed by a port on the switch, we cannot rightfully state that John Smith sat at his computer and caused event_X. We do know factually that we have a log entry of event_X triggered by the IP address that matches the machine John Smith uses. Our first ground for skepticism is the fact that IP addresses are easily spoofed; there is no definitive mechanism to halt this. We can minimize address spoofing with, e.g., BCP 38 ingress filtering, VLANs, port security and a variety of other corrective measures, but we cannot rid a network of it definitively. At least, I have not found any product or person who will come flat out and make this statement knowing it can be disputed. Going a step further with identification, let us suppose that we have Network Access Control with strategically placed cameras pointing at John Smith's desk, with timestamps that let us point out factually that John Smith sat at his desk at the exact time that entry_X occurred. Despite that, we still cannot prove positively that John Smith is the culprit. Because of the malware, virus, trojan and altogether "badware" factor, there is a high possibility that something outside of John Smith's actions was responsible for triggering event_X on his machine. So we place anti-virus, anti-malware and other types of software on his machine to further minimize these instances. We know factually that John Smith's machine is at IP address 10.10.10.2, because our switches tell us so. We know factually that his machine's MAC address is 00:00:de:ad:ca:fe, because his physical hardware tells us so. The cameras tell us he was at his desk all day; his anti-virus and all other installed software show green lights (meaning they have been updated). So what?
Here are the counterpoints:

1a) MAC addresses can easily be changed [13]. We can go on and state that "he didn't have administrative privileges," but that, too, can be disputed.
1b) Anti-*anything* software relies on updates based on "known threats." In the case of a customized or newly created trojan, unless a signature exists, it will not be detected.

1c) As forensics investigators, we would need to know how the router and firewall handle traffic - e.g., their anti-spoofing filtering. Without doing so, anyone can dispute the validity of the IP evidence.

1d) Because sysloggers rely on IP, we are back to 1c: IP is unreliable.

2) Preservation - Have we taken the necessary steps to ensure that the chain of custody was followed? "James Acme discovered an attack - entry_X. Upon further investigation..." Investigation by whom - James Acme? Does he understand forensics? Does he have the authority and/or knowledge to conduct incident response? What was his visibility while doing forensics? (Meaning: who else witnessed the initial response? How was the evidence handled and preserved?) I would like to point out the following comment in an article I read, and commented on, pertaining to mobile forensics [14]: "I recommend making a working copy and an archive copy. Now reseal and store your exhibit." My inference after reading that author's commentary was: a disk storage device of sorts was obtained, forensics were started, and at a later time a recommendation was made. If you take the time to read the entire article, one can instantly interpret the investigator as starting his incident response prior to making a copy of the evidence. "By the way... you should make a working copy and an archive copy..." One of the first steps, prior to even plugging in any software or hardware - be it EnCase, FTK, or Oxygen - is to make a bit-by-bit copy. This is done well after the picture taking, network mapping and all of the preceding steps. Otherwise, you have just tampered with the data and possibly cost someone a trip to the slammer when they may be innocent!

3) Collection - I associate "Preservation" with this step.
I would also emphasize and place "Preservation" again immediately after "Collection": after collecting the data, we need to perform our due diligence and preserve what we have collected. There should never be a moment when you say, "Oh, did I make a copy?" or "Gee, now where'd that copy go?" The statement "I recommend making a working copy" is horrific. Better understood: making a copy is not a recommendation but an outright mandate! As a forensics investigator, if I were on the stand answering questions opposite this expert witness (linked in the article), his comments, approach and methods would bring the validity of his evidence into question, not to mention its verification. He did not seem to follow standards and procedures. Why should someone's life be placed in the hands of an expert like this? I would question the methods of collection of the evidence; i.e., how was it accomplished? Was the machine "unplugged" or powered down? Was a live forensics CD used? Did the incident response handler have a proper write blocker in place for the bit-by-bit copy? Who was the responder? Was he or she qualified, and if so, by whom? There are plenty of questions to ask here. Collection methods can be meticulously debunked, especially if they are not properly documented from square one.

4) Examination - Because so many anti-forensics tools have made headway in such a fashion that "anyone without forensics experience can use them," how trustworthy is the data we are investigating and analyzing anyway? Did the forensic examiner take all known snapshots and entry points of the machine being analyzed? By this I mean physical images and diagrams of the connectivity to the machine and its location. E.g., in a cubicle
environment, who else may have access to this machine? Was there a clear-desk policy? Were there passwords lingering under the keyboard? Were the firewall rules, router configurations, NAC and anti-virus configurations reviewed in order to diagram how information flowed into this machine? Were state sessions taken from the firewall or router to show who or what was connected to the machine at the time of incident response? This is the best mechanism for validating the trustworthiness of the data we are obtaining: we are validating that there was no outside tampering with any potential evidence on this machine, and verifying it prior to moving anything. We have to remember that one small compromise on another machine in that network and the "forensic" wall comes crumbling down. There is plenty to ask here.

5) Analysis - What should we look for outside of the obvious (evidence of what caused event_X)? I would dissect our forensic copy and look for any hints of malicious software on the machine (viruses, worms, trojans, rootkits). If an instance comes up, we would need to know whether we can associate it as the cause of event_X on John Smith's machine, or rule it out. Did we perform due diligence in unpacking any discovered "packed" malware? Did we search for extreme anomalies: steganography, crypto, rootkits? Any one of these could exist. In searching for these instances, we: a) remove the potential for someone disputing our findings and punching holes in our evidence; and b) can show factually that the information was not subject to tampering from an outside source.

6) Presentation - I, as an expert, should be sure beyond a shadow of a doubt that event_X was caused by this machine and not John Smith. As a forensics examiner, my role is not to prove who did what, but rather to describe in detail what occurred, how it occurred, and how I discovered its occurrence.
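The Analysis-phase search for "packed" malware, crypto and steganography typically begins with an entropy sweep: encrypted or packed regions look nearly random, while text and ordinary code do not. A rough sketch of the heuristic follows; the 7.5 bits-per-byte threshold is an illustrative choice, not a standard:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte: approaches 8.0 for encrypted or packed
    data, and is far lower for plain text or typical executable code."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def looks_packed(data: bytes, threshold: float = 7.5) -> bool:
    """Heuristic only: high entropy is a hint to unpack and look closer,
    not proof of malice - compressed archives score high too."""
    return shannon_entropy(data) > threshold
```

An examiner would run this over file contents or disk regions and queue the high-entropy hits for manual unpacking, exactly the due diligence point 5 calls for.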
It would be too simple to solely connect the dots: MAC to IP to desk plus camera image equals John Smith. The reality, however, is that it is not that simple - especially not with technology, and even more so when a life is on the line. So is it safe to say that electronic evidence needs some revamping? Sure, but the questions are how we can do it and what we can do about certain issues. How and what can we do to prevent problematic questions from catching us off-guard? We have known all along as security professionals that we can put as many detective, corrective, preventive and other controls in place and still fail miserably. Electronic forensics, in short, has both many pros and many cons. But when someone has their life on the line - possibly due to "digital framing" - we have to work harder to make sure we do not make mistakes. Especially little ones, like: "Oh gosh... maybe I should back this up instead of running EnCase on a live disk. Know what, I recommend backing it up..." [3]

[1] http://en.wikipedia.org/wiki/Non-repudiation
[2] http://www.pgpi.org/doc/pgpintro/
[3] http://www.ffiec.gov/pdf/authentication_guidance.pdf
[4] http://en.wikipedia.org/wiki/RSA
[5] http://msdn.microsoft.com/en-us/library/bb250473(VS.85).aspx
[6] http://en.wikipedia.org/wiki/Joe_job
[7] http://en.wikipedia.org/wiki/False_flag
[8] Handbook of Research on Information Security and Assurance, Chapter XI (ISBN 978-1-59904-855-0)
[9] http://www.stat.ucla.edu/~cocteau/dimacs/bush.pdf
[10] http://isc.sans.org/diary.html?storyid=4207
[11] http://lists.jammed.com/ISN/1999/02/0007.html
[12] http://en.wikipedia.org/wiki/Reasonable_doubt
[13] http://www.technitium.com/tmac/index.html
[14] http://mobileforensics.wordpress.com/bio/
[JAM] http://www.huffingtonpost.com/kim-mance/teachers-pop-up-porn-nigh_b_145772.html
[EFNR] http://www.e-fensive.net/non-repudiation.jpg
[EFNS] http://www.e-fensive.net/repudiation.jpg

Special thanks to: David Litchfield, for sending me more information to read; Joel Gridley, for reminding me that I am officially a dinosaur, as well as sending me banking regulatory information; Larry Greenblatt, for being one of the most down-to-earth security professionals I have ever met; Chris Nickerson and others, for tuning in and promoting the Exotic Liability podcast (NOTE: keep thinking outside those boxes!); Dr. Anton Chuvakin, for his input, which led to additional clarification of my stance on this topic (repudiation versus non-repudiation); Susan Cascio, for spending so much time reading my documents and for forcing me to relearn horrible 'engrish' [sic] and grammar; Ron Herrmann, for assisting me in rewriting portions in an understandable fashion; and especially my wife, for tolerating me.