GET IEEE BIG DATA, JAVA, DOTNET, ANDROID, NS2, MATLAB, EMBEDDED PROJECTS AT LOW COST WITH BEST QUALITY. PLEASE CONTACT THE NUMBER BELOW.
FOR MORE INFORMATION, PLEASE FIND THE DETAILS BELOW:
Nexgen Technology
No. 66, 4th Cross, Venkata Nagar,
Near SBI ATM,
Puducherry.
Email Id: praveen@nexgenproject.com
Mobile: 9791938249
Telephone: 0413-2211159
www.nexgenproject.com
CONTACT: PRAVEEN KUMAR. L (+91 9791938249)
MAIL ID: sunsid1989@gmail.com, praveen@nexgenproject.com
Web: www.nexgenproject.com, www.finalyear-ieeeprojects.com
A REVIEW OF THEORETICAL AND PRACTICAL CHALLENGES OF TRUSTED
AUTONOMY IN BIG DATA
Abstract
Despite the advances made in artificial intelligence, software agents, and robotics,
there is little we see today that we can truly call a fully autonomous system. We
conjecture that the main inhibitor to advancing autonomy is a lack of trust. Trusted
autonomy is the scientific and engineering field that establishes the foundations and
groundwork for developing trusted autonomous systems (robots and software
agents) that can be used in our daily lives and integrated with humans
seamlessly, naturally, and efficiently. In this paper, we review this literature to
reveal opportunities for researchers and practitioners to work on topics that can
create a leap forward in advancing the field of trusted autonomy. We focus the
paper on the ‘trust’ component as the uniting technology between humans and
machines. Our inquiry into this topic revolves around three sub-topics: (1)
reviewing and positioning the trust modelling literature for the purpose of trusted
autonomy; (2) reviewing a critical subset of sensor technologies that allow a
machine to sense human states; and (3) distilling some critical questions for
advancing the field of trusted autonomy. The inquiry is augmented with
conceptual models that we propose along the way by recompiling and reshaping
the literature into forms that enable trusted autonomous systems to become a
reality. The paper offers a vision for a Trusted Cyborg Swarm, an extension of our
previous Cognitive Cyber Symbiosis concept, whereby humans and machines
meld together in a harmonious, seamless, and coordinated manner.
EXISTING SYSTEM:
Finding technological solutions to address this question is a complex endeavor
that is compounded further by the dynamic and possibly chaotic nature of a
trusting relationship. The slightest change in context may cause a dramatic
change in trust estimation with a profound impact on trust dynamics. For
example, a slight change in the security level of the communication channel
between the human and the machine can cause a dramatic change in trust
estimation. To put it more succinctly, TAS require the machine to possess a high
level of intelligence extending beyond the level of the task it is performing to
include the level of the interaction it is involved in, the environment it is
‘situated’ and ‘embodied’ within, and to be aware of its own states, actions,
motives, goals, and action production mechanisms. TAS require smart
technologies for the machine to manage the interaction with humans; not just to
convince the human that the machine is trustworthy, but to be ‘self-aware’ of
trust to the extent of making its own decisions about the varying degrees of
trustworthiness of the human operators/users themselves during the interaction,
based on context.
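The sensitivity of trust estimation to context described above can be illustrated with a toy model. The update rule, function name, and all numeric parameters below are hypothetical assumptions for illustration only, not taken from the paper:

```python
# Toy illustration of context-sensitive trust dynamics (a hypothetical
# model, not the paper's): trust is updated from interaction outcomes,
# with the evidence discounted by a context factor such as the security
# of the communication channel between human and machine.

def update_trust(trust, outcome, channel_security, learning_rate=0.3):
    """Move the trust estimate toward the observed outcome, gated by context.

    A modest drop in channel_security discounts the same positive outcome,
    pulling the estimate down and mimicking the 'dramatic change' in trust
    estimation described in the text.
    """
    evidence = outcome * channel_security  # insecure channel weakens evidence
    return (1 - learning_rate) * trust + learning_rate * evidence

trust = 0.8
# Secure channel: a positive interaction raises trust.
secure = update_trust(trust, outcome=1.0, channel_security=0.95)
# Slightly degraded channel security: same positive outcome, lower trust.
degraded = update_trust(trust, outcome=1.0, channel_security=0.60)
print(secure, degraded)
```

Even with an identical positive outcome, the small change in the context factor moves the estimate in opposite directions, which is the kind of discontinuity in trust dynamics the text warns about.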
PROPOSED SYSTEM:
Designing the Artificial Intelligence (AI) models to manage the interaction
between the human and the machine in the way explained above requires us
to address a set of fundamental aspects. The first aspect, which is an
immediate and implicit requirement of this view on TAS, is autonomy. The actors
involved in TAS, either human agents or machine agents, must be able to operate
autonomously in their environment, where the autonomy of an agent relies on
continuous sensing and awareness of the environment and other actors. Further,
sensing produces input data that need to be processed in order to make
decisions. These aspects, which define at individual level the agents involved in
TAS, are coupled by the concept of trust when the interaction between agents is
considered. Therefore, we believe it is pertinent to structure the discussion on
TAS in three main parts that obviously overlap to some extent, but in essence
focus on the three themes mentioned above: (1) the concept of trust, (2) sensing
and data, and (3) autonomy.
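The sense-process-decide loop and its coupling to trust described above can be sketched in code. The class, method names, and numeric parameters are illustrative assumptions, not structures from the paper:

```python
# Minimal sketch of an agent whose autonomy rests on continuous sensing,
# processing of the sensed data, and trust-coupled decisions, as outlined
# in the text. All names and parameters here are hypothetical.

class Agent:
    def __init__(self, name, trust_threshold=0.5):
        self.name = name
        self.trust_threshold = trust_threshold
        self.trust = {}  # trust estimates for the other actors

    def sense(self, environment):
        # Continuous sensing: read raw observations about other actors.
        return environment.get(self.name, [])

    def process(self, observations):
        # Processing: turn sensed input data into updated trust estimates.
        for actor, outcome in observations:
            old = self.trust.get(actor, 0.5)
            self.trust[actor] = 0.7 * old + 0.3 * outcome

    def decide(self, actor):
        # Decision: interact only when trust in the actor clears a threshold.
        return self.trust.get(actor, 0.5) >= self.trust_threshold

# Two positive interactions with a human raise the machine's trust estimate.
env = {"machine": [("human", 1.0), ("human", 0.9)]}
agent = Agent("machine")
agent.process(agent.sense(env))
print(agent.decide("human"))
```

The three methods map directly onto the three themes above: `sense` covers sensing, `process` covers data, and `decide` shows how autonomy at the individual level is coupled to trust once interaction between agents is considered.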
CONCLUSION
This paper offered an overview of the trusted autonomy field, with a particular
emphasis being placed on the role of big data and the challenges that big data
creates for trusted autonomous systems. A review of trust modelling was
presented. The review of sensors and processing, as two fundamental building
blocks that directly relate to big data in trusted autonomous systems, was
limited to the class of sensors offering data on humans’ mental and behavioural
states. We compiled the literature into conceptual diagrams that offer models of
trust, the trust bus, and multi-agent and autonomous-system architectures for
trusted autonomy. The paper concluded with six key big data challenges
for trusted autonomous systems.