
Oit socijalne mreze

7,045 views

I used a number of outside resources and haven't credited any of them; I will do so in the future. This presentation is in Serbian and is used by students of the School of Electrical and Computer Engineering of Applied Studies, Belgrade, Serbia.

Published in: Education

  • A new study from Forrester proves that the majority of Americans are a bunch of lazy re-tweeters. 93% of online consumers in the emerging markets of China, India, Mexico and Brazil use social media tools at least once a month. U.S. and European consumers are far more likely to view social media as a spectator sport, joining it and then just watching it fly by. In the U.S., 68% of social media users are "joiners," which means they maintain a profile on a social networking site and visit social networks. 73% are "spectators," or users who mostly just read blogs, online forums, customer ratings/reviews and tweets, listen to podcasts and watch videos. This number is strikingly similar in Europe (EU-7 countries, to be specific), with 69% of users classified as spectators and 50% as joiners.
Creators and conversationalists: only 24% of U.S. users are content creators and 36% are conversationalists. Those numbers are quite similar in the EU, with 23% classified as "content creators" and 26% as "conversationalists." In Asia, these numbers look drastically different. Seventy-five percent of online adults in metropolitan China and India create content, which includes publishing blogs and web pages, uploading video and audio/music they made, and posting articles or stories that they wrote. Japanese social media users do not follow the same patterns as Chinese and Indian social networkers. A mere 28% of Japanese users visit social networking sites at least once a month. Only 13% of online Japanese adults visit Facebook on a regular monthly basis. Instead, they prefer sites like mixi or Twitter, which fit their preference for online anonymity.
Emerging social mobile markets, China and Africa: another Forrester report showed that China and other Asia-Pacific countries led the pack in mobile adoption, including mobile social usage and work usage. They were also more likely to own multiple devices. This report showed that in metropolitan China, 33% accessed social networks via mobile, whereas only 25% of U.S. users and 11% of European users did the same. Forrester's report revealed that Chinese users accessed social sites the most, calling them "super connecteds." This study does not include social network usage in Africa, which is second only to China. Toward the end of last year, Facebook partnered with French cell operator Orange to bring inexpensive cellphones armed with Facebook to Africa and Europe. Facebook is available in 70 languages, and more than 75% of its users are located outside the U.S.
  • New Web technologies such as Web 2.0, Web 3.0, social networks and mashups, together with the standards that support them, should make it possible to realize the idea of the Web as a universal medium for exchanging data, information and knowledge. Web 3.0 denotes the third generation of the World Wide Web, which usually assumes semantic tagging of Web content; this third generation should enable the realization of the Semantic Web (Web 3.0). The Semantic Web is a Web of data whose ultimate goal is to let computers do more useful work for Web users and to support the development of systems that enable secure interactions over the network. The term refers to the W3C's vision of a Web of linked data. Semantic Web technologies should enable people to create data stores on the Web, build vocabularies, and write rules for handling data. They should provide an environment in which queries can be written over that data, links created using ontology vocabularies, and so on (Linked Data, 2010). The World Wide Web Consortium (W3C) is an international community that develops standards ensuring the long-term growth of the Web. data.gov.uk is a UK Government project to open up almost all non-personal data acquired for official purposes for free re-use. Sir Tim Berners-Lee and Professor Nigel Shadbolt are the two key figures behind the project.
  • WorldWideWeb (written without spaces) was the first Web browser, created by Tim Berners-Lee at the European Particle Physics Laboratory (CERN) in Switzerland. The browser was later renamed Nexus to distinguish it from the global hypertext system (Raman, 2009), the World Wide Web, whose use had spread massively (Tim Berners-Lee: WorldWideWeb, the first Web browser). Today the Web is used as a tool for collaboration and communication, enabling people to exchange all kinds of digital content, ideas and knowledge faster and more easily.
  • Meme: an Internet meme is an idea that spreads across the World Wide Web. The idea can take the form of a hyperlink, video, image, website, or just a word or phrase, such as a deliberate misspelling of a word. A meme can spread from person to person via social networks, blogs, direct email, or some other Web-based service. Example: http://knowyourmeme.com/memes/girl-with-a-funny-talent
  • In their opening remarks, John Battelle and Tim O'Reilly outlined their definition of the "Web as Platform", where software applications are built upon the Web as opposed to upon the desktop. The unique aspect of this migration, they argued, is that "customers are building your business for you".[11] They argued that the activities of users generating content (in the form of ideas, text, videos, or pictures) could be "harnessed" to create value. O'Reilly and Battelle contrasted Web 2.0 with what they called "Web 1.0". They associated Web 1.0 with the business models of Netscape and the Encyclopædia Britannica Online. For example, Netscape framed "the web as platform" in terms of the old software paradigm: their flagship product was the web browser, a desktop application, and their strategy was to use their dominance in the browser market to establish a market for high-priced server products. Control over standards for displaying content and applications in the browser would, in theory, give Netscape the kind of market power enjoyed by Microsoft in the PC market. Much like the "horseless carriage" framed the automobile as an extension of the familiar, Netscape promoted a "webtop" to replace the desktop, and planned to populate that webtop with information updates and applets pushed to the webtop by information providers who would purchase Netscape servers.[12] In short, Netscape focused on creating software, updating it on occasion, and distributing it to the end users. O'Reilly contrasted this with Google, a company that did not at the time focus on producing software, such as a browser, but instead on providing a service based on data such as the links Web page authors make between sites. Google exploits this user-generated content to offer Web search based on reputation through its "PageRank" algorithm. Unlike software, which undergoes scheduled releases, such services are constantly updated, a process called "the perpetual beta". 
A similar difference can be seen between the Encyclopædia Britannica Online and Wikipedia: while the Britannica relies upon experts to create articles and releases them periodically in publications, Wikipedia relies on trust in anonymous users to constantly and quickly build content. Wikipedia is not based on expertise but rather an adaptation of the open source software adage "given enough eyeballs, all bugs are shallow", and it produces and updates articles constantly. O'Reilly's Web 2.0 conferences have been held every year since 2003, attracting entrepreneurs, large companies, and technology reporters.
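The PageRank idea mentioned above can be made concrete. Below is a minimal sketch of the power-iteration principle behind reputation-based ranking: each page's rank is repeatedly redistributed along its outgoing links. The toy link graph and the damping factor of 0.85 are illustrative assumptions, not Google's actual implementation.

```python
# Minimal PageRank power iteration: pages gain reputation from the
# pages that link to them, weighted by the linker's own reputation.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A toy link graph: both "a" and "b" link to "hub", so "hub" ends up
# with the highest rank.
graph = {"a": ["hub"], "b": ["hub"], "hub": ["a"]}
ranks = pagerank(graph)
```

The total rank stays at 1.0 across iterations, so the values can be read as a probability distribution over pages.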
  • The fact that users are at the center of attention where social networks are concerned led Time magazine to name ordinary people its Person of the Year for 2006. That year, among many other significant events, the spotlight fell on the community of people and on collaboration among them on a scale that had never existed before. The World Wide Web enabled millions of people to post video content on YouTube or to build the online metropolis, MySpace. New Web technologies make it possible to gather diverse content posted by a large number of people, who in this way gain significance of their own (Grossman, 2006).
  • The Architecture of Participation, by Tim O'Reilly, June 2004.
I've come to use the term "the architecture of participation" to describe the nature of systems that are designed for user contribution. Larry Lessig's book, Code and Other Laws of Cyberspace, which he characterizes as an extended meditation on Mitch Kapor's maxim, "architecture is politics", made the case that we need to pay attention to the architecture of systems if we want to understand their effects.
I immediately thought of Kernighan and Pike's description of the Unix software tools philosophy referred to above. I also recalled an unpublished portion of the interview we did with Linus Torvalds to create his essay for the 1998 book, Open Sources. Linus too expressed a sense that architecture may be more important than source code. "I couldn't do what I did with Linux for Windows, even if I had the source code. The architecture just wouldn't support it." Too much of the Windows source code consists of interdependent, tightly coupled layers for a single developer to drop in a replacement module.
And of course, the Internet and the World Wide Web have this participatory architecture in spades. As outlined above in the section on software commoditization, any system designed around communications protocols is intrinsically designed for participation. Anyone can create a participating, first-class component.
In addition, the IETF, the Internet standards process, has a great many similarities with an open source software project. The only substantial difference is that the IETF's output is a standards document rather than a code module. Especially in the early years, anyone could participate, simply by joining a mailing list and having something to say, or by showing up to one of the three annual face-to-face meetings. Standards were decided by participating individuals, irrespective of their company affiliations. The very name for proposed Internet standards, RFCs (Request for Comments), reflects the participatory design of the Net. Though commercial participation was welcomed and encouraged, companies, like individuals, were expected to compete on the basis of their ideas and implementations, not their money or disproportional representation. The IETF approach is where open source and open standards meet.
And while there are successful open source projects like Sendmail, which are largely the creation of a single individual, and have a monolithic architecture, those that have built large development communities have done so because they have a modular architecture that allows easy participation by independent or loosely coordinated developers. The use of Perl, for example, exploded along with CPAN, the Comprehensive Perl Archive Network, and Perl's module system, which allowed anyone to enhance the language with specialized functions, and make them available to other users.
The web, however, took the idea of participation to a new level, because it opened that participation not just to software developers but to all users of the system. It has always baffled and disappointed me that the open source community has not claimed the web as one of its greatest success stories. If you asked most end users, they are most likely to associate the web with proprietary clients such as Microsoft's Internet Explorer than with the revolutionary open source architecture that made the web possible. That's a PR failure! Tim Berners-Lee's original web implementation was not just open source, it was public domain. NCSA's web server and Mosaic browser were not technically open source, but source was freely available. While the move of the NCSA team to Netscape sought to take key parts of the web infrastructure to the proprietary side, and the Microsoft-Netscape battles made it appear that the web was primarily a proprietary software battleground, we should know better. Apache, the phoenix that grew from the NCSA server, kept the open vision alive, keeping the standards honest, and not succumbing to proprietary embrace-and-extend strategies.
But even more significantly, HTML, the language of web pages, opened participation to ordinary users, not just software developers. The "View Source" menu item migrated from Tim Berners-Lee's original browser, to Mosaic, and then on to Netscape Navigator and even Microsoft's Internet Explorer. Though no one thinks of HTML as an open source technology, its openness was absolutely key to the explosive spread of the web. Barriers to entry for "amateurs" were low, because anyone could look "over the shoulder" of anyone else producing a web page. Dynamic content created with interpreted languages continued the trend toward transparency. And more germane to my argument here, the fundamental architecture of hyperlinking ensures that the value of the web is created by its users.
In this context, it's worth noting an observation originally made by Dan Bricklin in his paper, The Cornucopia of the Commons. There are three ways to build a large database, wrote Dan. The first, demonstrated by Yahoo, is to pay people to do it. The second, inspired by lessons from the open source community, is to get volunteers to perform the same task. The Open Directory Project, an open source Yahoo competitor, is the result. (Wikipedia provides another example.) But Napster demonstrates a third way. Because Napster set its defaults to automatically share any music that was downloaded, every user automatically helped to build the value of the shared database. This architectural insight may actually be more central to the success of open source than the more frequently cited appeal to volunteerism. The architecture of Linux, the Internet, and the World Wide Web are such that users pursuing their own "selfish" interests build collective value as an automatic byproduct. In other words, these technologies demonstrate some of the same network effect as eBay and Napster, simply through the way that they have been designed.
These projects can be seen to have a natural architecture of participation. But as Amazon demonstrates, by consistent effort (as well as economic incentives such as the Associates program), it is possible to overlay such an architecture on a system that would not normally seem to possess it.
  • Long Tail Theory and Recommender Systems
The following section gives a brief introduction to long tail theory, an important concept in e-commerce today, and shows how major e-commerce sites like Amazon.com and Netflix use recommender systems to reach customers of non-popular products and generate significant profits from them.
Wikipedia definition: The Long Tail is a retailing concept describing the niche strategy of selling a large number of unique items in relatively small quantities, usually in addition to selling fewer popular items in large quantities. (Ref: http://en.wikipedia.org/wiki/The_Long_Tail)
Figure 2. An example of a power law graph showing the popularity ranking of products. To the right is the long tail; to the left are the few popular products that dominate. (Ref: http://en.webrazzi.com/2009/09/18/recommendation-systems-increasing-profit-by-long-tail/)
It is well known that the majority of online sales (e-commerce and music) are of the most popular products. This is a crucial problem: products that sell a lot have a low profit margin, since all competitors have to sell the same product at the same price. The non-popular items, on the other hand, increase storage costs because they occupy space and never sell as much as the popular products. A solution: if non-popular products are brought to the attention of the right buyers through a successful mechanism, a company's profitability can increase significantly. Companies like Amazon that apply the Long Tail effect successfully make most of their profit from the long tail part of the chart (i.e. by selling non-popular products) and not from the best-selling products. Through the use of recommender systems, products that were typically not advertised because of their unpopularity are now introduced to interested buyers who might actually buy them.
Calculated recommendations are also used to design personalized pages for users or for email marketing. A few examples of how recommender systems can be used to target the right customers:
    - Personalized main page recommendation: a main page designed for each user according to their past activity. A second usage is recommending similar or new products to the user.
    - Cross-selling is also a vital source of profit: products that are bought together should be shown together.
    - Through recommender systems, companies can run campaigns that are of interest to the user, for example personalized ad campaigns and customized e-mail recommendations to customers.
Benefits of Recommender Systems
Some of the major benefits which recommender systems offer e-commerce sites are as follows:
    - Most importantly, unpopular products that aren't typically advertised are introduced to interested buyers who might actually buy them.
    - Recommender system usage has increased monetary benefits too. Companies that use recommendation systems have experienced a 10 to 35 percent increase in sales. Amazon has also reported that 35% of its sales come through its recommendation system, according to the company's 2006 sales figures. (Ref: http://en.webrazzi.com/2009/09/18/recommendation-systems-increasing-profit-by-long-tail/)
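The "customers who bought this also bought" mechanism described above can be sketched very simply as item-to-item co-purchase counting. The purchase data below is invented for illustration; real systems such as Amazon's use far richer signals and similarity measures.

```python
from collections import Counter

# Each row is one hypothetical customer's purchase history.
purchases = [
    {"bestseller", "niche_vinyl"},
    {"bestseller", "niche_vinyl", "obscure_book"},
    {"bestseller", "obscure_book"},
    {"niche_vinyl", "obscure_book"},
]

def recommend(item, baskets, top_n=2):
    """Recommend the items most often bought together with `item`."""
    co_counts = Counter()
    for basket in baskets:
        if item in basket:
            co_counts.update(basket - {item})
    return [other for other, _ in co_counts.most_common(top_n)]

# Long-tail items can surface because co-purchase patterns, not raw
# popularity, drive the score.
suggestions = recommend("niche_vinyl", purchases)
```

This is how an unpopular item reaches the buyers most likely to want it: it is recommended wherever it co-occurs with something the customer already bought.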
  • Introduction
How many times have you seen a web site and said, "This would be exactly what I wanted—if only . . ." If only you could combine the statistics here with data from your company's earnings projections. If only you could take the addresses for those restaurants and plot them on one map. How often have you entered the date of a concert into your calendar with a single click instead of retyping? How often do you wish that you could make all the different parts of your digital world—your e-mail, your word processor documents, your photos, your search results, your maps, your presentations—work together more seamlessly? After all, it's all digital and malleable information—shouldn't it all just fit together?
In fact, below the surface, all the data, web sites, and applications you use could fit together. This book teaches you how to forge those latent connections—to make the Web your own—by remixing information to create your own mashups. A mashup, in the words of the Wikipedia, is a web site or web application "that seamlessly combines content from more than one source into an integrated experience." [1] Learning how to draw content from the Web together into new integrated interfaces and applications, whether for yourself or for others, is the central concern of this book.
Let's look at a few examples to see how people are remixing data and services to make something new and useful:
    - Housingmaps.com brings together Google Maps and the housing and rental listings from Craigslist. Note that it was invented by neither Google nor Craigslist but by an individual programmer, Paul Rademacher. Housingmaps.com adds to the functionality of Craigslist, which will show you on a map where a specific listing is located but not all the rentals or houses in an area. [2]
    - Google Maps in Flickr (GMiF) brings together Flickr pictures, Google Maps, Google Earth, and the Firefox browser via Greasemonkey. [3]
    - The Library Lookup bookmark is a JavaScript bookmarklet that connects Amazon.com and your local library catalog. [4]
To create your own mashups and customize the Web, you will look at these examples in greater detail:
    - Taking a book you found on Amazon.com and instantly locating it in your local library
    - Synthesizing a single news feed from many news sources through Yahoo! Pipes
    - Posting Flickr photos to blogs with a click of a button
    - Displaying your photos in Google Maps and Google Earth
    - Using special extensions to Firefox to learn how to program Google Maps
    - Inserting extra information into web pages with Greasemonkey
    - Plotting stories from your favorite news source (such as the New York Times) on a map
    - Making your own web site remixable so that others can create mashups from your content
    - Creating Google calendars from your event calendars from the Web
    - Storing and retrieving your files from online storage (S3)
    - Creating an online spreadsheet from your Amazon.com wishlist
    - Recognizing and manipulating data embedded in web pages
    - Adding an event listed on the Web to your calendar and e-mailing it to other people with one mouse click
    - Building web search functionality into your own web applications
    - Republishing Word documents that are custom-formatted for your web site
Mashups are certainly hot right now, which is interesting because it makes you part of a shared undertaking, a movement. Mashups are fun and often educational. There's delight in seeing familiar things brought together to create something new that is greater than the sum of its parts. Some mashups don't necessarily ask to be taken that seriously. And yet mashups are also powerful—you can get a lot of functionality without a lot of effort. They might not be built to last forever, but you often can get what you need from them without having to invest more effort than you want to in the first place.
The Web 2.0 Movement
The Web 2.0 bandwagon is an important reason why mashups are popular now. Mashups have been identified explicitly (under the phrases "remixable data source" and "the right to remix") by Tim O'Reilly in "What is Web 2.0?" [5] Added to this, we have the development of what might be accurately thought of as "Web 2.0 technologies/mind-sets" to remix/reuse data, web services, and micro-applications to create hybrid applications. Recent developments bring us closer to enabling users to recombine digital content and services:
    - Increasing availability of XML data sources and data formats in business, personal, and consumer applications (including office suites)
    - Wide deployment of XML web services
    - Widespread current interest in data remixing or mashups
    - Ajax and the availability of JavaScript-based widgets and micro-applications
    - Evolution of web browsers to enable greater extensibility (for example, Firefox extensions and Greasemonkey scripts)
    - Explosive growth in "user-generated content" or "lead-user innovation"
    - Wider conceptualization of the Internet as a platform ("Web 2.0")
    - Increased broadband access
These developments have transformed creating mashups from being technically challenging to nearly mainstream. It is not that difficult to get going, but you need to know a bit about a fair number of things, and you need to be playful and somewhat adventurous. Will mashups remain cutting-edge forever? Undoubtedly, no, but not because they will prove to be an irrelevant fad but because the functionality we see in mashups will eventually be subsumed into the ordinary "what-we-expect-and-think-has-always-been-there" functionality of our electronic society.
Moreover, mashups reflect deeper trends, even the deepest trends of human desire. As the quality, quantity, and diversity of information grow, users long for tools to access and manage this bewildering array of information. Many users will ultimately be satisfied by nothing less than an information environment that gives them seamless access to any digital content source, handles any content type, and applies any software service to this content. Consider, for example, what a collection of bloggers expressed as their desires for next-generation blogging tools: [6]
Bloggers want tools that are utterly simple and allow them to blog everything that they can think, in any format, from any tool, from anywhere. Text is just the beginning: Bloggers want to branch out to multiple media types including rich and intelligent use of audio, photos, and video. With input, having a dialog box is also seen as just a starting place for some bloggers: everything from a visual tool to easy capture of things a blogger sees, hears, or reads point to desirable future user interfaces for new generations of blogging tools.
Mashups are starting to forge this sought-after access and integration of data and tools—not only in the context of blogging but also to any point of interaction between users and content.
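The "synthesizing a single news feed from many sources" example above can be sketched in a few lines, in the style of a Yahoo! Pipes merge. The two RSS documents are invented and inlined so the sketch is self-contained; a real mashup would fetch the feeds over HTTP.

```python
import xml.etree.ElementTree as ET

# Two hypothetical RSS feeds from different news sources.
FEED_A = """<rss><channel>
  <item><title>Story A1</title><pubDate>2010-03-02</pubDate></item>
</channel></rss>"""

FEED_B = """<rss><channel>
  <item><title>Story B1</title><pubDate>2010-03-01</pubDate></item>
  <item><title>Story B2</title><pubDate>2010-03-03</pubDate></item>
</channel></rss>"""

def merge_feeds(*feeds):
    """Aggregate items from several RSS documents into one list, newest first."""
    items = []
    for xml_text in feeds:
        for item in ET.fromstring(xml_text).iter("item"):
            items.append((item.findtext("pubDate"), item.findtext("title")))
    # ISO-style dates sort correctly as strings; reverse for newest-first.
    return [title for _, title in sorted(items, reverse=True)]

merged = merge_feeds(FEED_A, FEED_B)
```

The result is one synthesized feed, which could then be republished as RSS or rendered on a page.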
  • Hackability (or ability to tinker): the ability of a tool or device to be modified in a way that was not intended by its inventor, so that users can invent new ways to use it. See also: Generativity.
Hackability is the ability to:
    - Participate and create instead of just consuming passively
    - Invent the future we want, not the one we're given (or sold)
Cool things that are enabled by hackability:
    - Wikipedia
    - OpenStreetMap
    - Flickr.com: 4 billion pictures, including 120M under CC license
    - Free and Open-Source Software: development & distribution
    - and millions of other examples (LOLcats!)
  • Unlike many Internet standards and protocols, the mashup did not arise from a specific project. Mashups emerged when users (above all hackers) began combining existing standards and protocols in innovative ways. A mashup can be defined as: a Web page that uses Web technologies, primarily JavaScript, PHP and XML, and that presents information from different sources, or presents it in a different way. This way the user does not have to visit a large number of locations on the Internet to obtain a given piece of information (Feiler, 2008).
2.2.1 Basic idea
A mashup is a Web application produced by combining data and/or services from multiple sources. The term itself comes from the music industry. Although mashups differ in user interface and data sources, common characteristics can be identified. All mashups satisfy Representational State Transfer (RESTful) principles. The key element of any mashup is the data that is aggregated and presented to the user. The data can come from Web services (which serialize data into XML or JSON) and from RSS feeds. Web services are used to provide additional data or to transform the data being mixed (Clarkin & Holmes).
For the term mashup, as with many other words frequently heard in the media and especially on the Internet (Web 2.0, for example), there is no precise formal definition issued by a standards body. If you asked ten self-proclaimed mashup experts for a definition, you would get ten different definitions. At their core, mashups solve the basic problem of data exchange: accessing and combining data from different internal and external sources in a way that was not planned in advance. Most well-known mashups are simple: data from one source is displayed on an interactive map. However, mashups used in the enterprise are considerably more complicated, because users need to combine data from applications by different vendors in order to obtain the complex information required for business operations and decision making (Crupi & Warner, Enterprise Mashups: The New Face of Your SOA, 2009).
2.2.1.1 Definitions of the term mashup
Some definitions of a mashup:
    - A mashup (also mash up and mash-up) is a song or composition created by blending two or more songs (Mashup (music));
    - In web development, a mashup is a Web application that combines data or functionality from two or more sources into a single integrated application... (Mashup (web application hybrid));
    - A video mashup (also video mash-up) is a combination of video content from multiple sources that are usually not related to one another... (Mashup (video));
    - A Web service or software tool that combines two or more services to create an entirely new service. The term is used to describe user-generated content from multiple sources (Multiplatform Glossary).
From these definitions we can conclude that a mashup represents a new, perhaps unexpected result of the process of creating new information, or of presenting it in a new way, using data from multiple different sources. A mashup is created by using and combining data from multiple sources and services, with the goal of creating something new or obtaining added value (Mashups: The What and Why). The basic goal of a mashup is to let users combine and unify data in a way that suits them (Bernal, 2009).
One of the first examples of a mashup is HousingMaps. On this Web site you can get information about houses and apartments for rent and/or sale. That information comes from the Craigslist site and is presented on a geographic map using the Google Maps API. The user gets results displayed on the map by selecting a city, price, number of rooms and keywords. Greater possibilities for reusing, combining and mixing content from multiple Web sites were offered by Yahoo! Pipes, a tool that enables aggregating, composing and modifying content using simple commands.
Figure 4 shows the process of creating a mashup. Figure 4. Mashups (Bernal, 2009)
One of the key characteristics of a mashup is the separation of the user interface from the data and services. This allows a mashup to be reused, since the data remains accessible through services that are themselves reusable. The idea of reuse makes a business more agile and makes it possible to create new applications as needed.
E.g. Blondie vs. The Doors, http://www.youtube.com/watch?v=dnhKPw2NXIw&feature=player_embedded#
Representational State Transfer (REST) is a style of software architecture for distributed hypermedia systems such as the WWW. A REST architecture consists of clients and servers. The client initiates a request to the server; the server processes the request and returns an appropriate response (Representational State Transfer).
http://www.housingmaps.com/
http://www.craigslist.org/about/sites
http://maps.google.com
http://pipes.yahoo.com/pipes/
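The HousingMaps pattern described above boils down to joining listing data from one source to geocoded coordinates from another. In this sketch both data sets are invented stand-ins: the listings play the role of Craigslist data, and the `geocode` table stands in for calls to a geocoding service such as the Google Maps API.

```python
# Hypothetical Craigslist-style listings (one data source being mixed in).
listings = [
    {"title": "2BR apartment", "address": "10 Main St"},
    {"title": "Studio", "address": "99 Oak Ave"},
]

# Hypothetical geocoding results, standing in for a geocoder API call.
geocode = {
    "10 Main St": (44.82, 20.46),
    "99 Oak Ave": (44.80, 20.47),
}

def to_map_markers(listings, geocode):
    """Join listings with coordinates to produce map markers: the mashup step."""
    markers = []
    for listing in listings:
        coords = geocode.get(listing["address"])
        if coords:  # skip listings we cannot place on the map
            markers.append({"title": listing["title"],
                            "lat": coords[0], "lng": coords[1]})
    return markers

markers = to_map_markers(listings, geocode)
```

Note how the data (listings, coordinates) stays separate from the presentation (markers), matching the separation of interface from data and services described above.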
  • I've long made the assertion that one of the central differences between the PC era and the Web 2.0 era is that once the internet becomes platform, rather than just an add-on to the PC, you can build applications that harness network effects, so that they become better the more people use them. I've used the phrase "harnessing collective intelligence " to frame this phenomenon.  Yesterday at the Web 2.0 Conference , I hosted a panel on this topic. It featured Jim Buckmaster  of Craigslist,Toni Schneider  of Automatic (Wordpress), Owen van Natta  of Facebook, and Richard Roseblatt , former chairman of Myspace, and now CEO of Demand Media.One of the threads we focused on in the discussion was the difference between "user generated content," which many people focus on, and a far broader, more thought-provoking understanding of how collective intelligence is put to work. Craig Kaplan of predictwallstreet.com  asked a question during the panel, and followed up with an email, which I thought was worth sharing:As I mentioned in my question to the panelists, I feel there is a big difference between user generated content and collective intelligence.  For example, PredictWallStreet.com focuses one million unique monthly visitors on predicting whether a stock will close up or down. With the help of our algorithms the community can outperform the market -- something most analysts can't do. That's not user-generated content, that's a cognitive community exhibiting super intelligent behavior.Wikipedia exhibits super intelligent behavior when it is more comprehensive and more up to date than encyclopedia Britannica. Britannica has the brand, but Wikipedia has the Brains on Board. And with very minimal software, Wikipedia directs millions of minds to create a new and better kind of encyclopedia. That's not just user-generated content. 
It's a cognitive community exhibiting super-intelligent behavior. Together we form a super-intelligence that is a lot smarter than any one of us alone. As you say, Web 2.0 truly is just the froth before the wave. I believe networks of super-intelligent cognitive communities are our future.

While I'm not sure I'd use the phrase "super-intelligent," I agree very much with Craig that there's a lot more to harnessing collective intelligence (the new HCI) than user-generated content (UGC). Google's PageRank is HCI but not UGC (although it is derived from the user-generated content of the WWW itself); WordPress's Akismet anti-spam plugin is HCI but not UGC; Craigslist's user-moderation features are HCI applied to the problem of unbridled UGC.

Even sites that are very explicitly based on user-generated content, like MySpace, at their best bring in other aspects of HCI. During the panel, I mentioned Kathy Sierra's recent observation about MySpace, courtesy of her daughter: "MySpace keeps doing what everybody really wants, and it happens instantly.... As soon as you think of something, it's in there.... It's always evolving. It changes constantly. There's always something new." Richard responded that at MySpace, "product development is marketing." They test features on real customers in real time, trying to learn what they like by offering it to them, keeping what works, and changing what doesn't.

As you can see from Richard's comment, Web 2.0 concepts like harnessing collective intelligence and lightweight, responsive software development are intimately related. They require new competencies, new development models, and new attitudes towards the application development process.
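The PredictWallStreet idea above, focusing many independent guesses on a single question, can be sketched as plain vote aggregation. The sketch below is a simple majority vote over up/down predictions; it is an illustration of the principle, not the site's actual (proprietary) algorithm, and the votes are made up.

```python
from collections import Counter

def community_prediction(votes):
    """Aggregate individual up/down guesses into one community call.
    Returns the majority prediction and the fraction of voters behind it."""
    counts = Counter(votes)
    call, n = counts.most_common(1)[0]
    return call, n / len(votes)

# Seven hypothetical visitors predict tomorrow's close for one stock.
votes = ["up", "up", "down", "up", "down", "up", "up"]
call, confidence = community_prediction(votes)
# call == "up", confidence == 5/7
```

The interesting property is that the aggregate can be better calibrated than any individual voter, which is exactly the "collective intelligence vs. user-generated content" distinction the panel was drawing.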
  • 7. Rich User Experiences

As early as Pei Wei's Viola browser in 1992, the web was being used to deliver "applets" and other kinds of active content within the web browser. Java's introduction in 1995 was framed around the delivery of such applets. JavaScript and then DHTML were introduced as lightweight ways to provide client-side programmability and richer user experiences. Several years ago, Macromedia coined the term "Rich Internet Applications" (which has also been picked up by open source Flash competitor Laszlo Systems) to highlight the capabilities of Flash to deliver not just multimedia content but also GUI-style application experiences.

However, the potential of the web to deliver full-scale applications didn't hit the mainstream till Google introduced Gmail, quickly followed by Google Maps, web-based applications with rich user interfaces and PC-equivalent interactivity. The collection of technologies used by Google was christened AJAX, in a seminal essay by Jesse James Garrett of web design firm Adaptive Path. He wrote:

"Ajax isn't a technology. It's really several technologies, each flourishing in its own right, coming together in powerful new ways. Ajax incorporates:
standards-based presentation using XHTML and CSS;
dynamic display and interaction using the Document Object Model;
data interchange and manipulation using XML and XSLT;
asynchronous data retrieval using XMLHttpRequest;
and JavaScript binding everything together."

Web 2.0 Design Patterns

In his book, A Pattern Language, Christopher Alexander prescribes a format for the concise description of the solution to architectural problems.
He writes: "Each pattern describes a problem that occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice."

Users Add Value
The key to competitive advantage in internet applications is the extent to which users add their own data to that which you provide. Therefore: Don't restrict your "architecture of participation" to software development. Involve your users both implicitly and explicitly in adding value to your application.

Network Effects by Default
Only a small percentage of users will go to the trouble of adding value to your application. Therefore: Set inclusive defaults for aggregating user data as a side-effect of their use of the application.

Some Rights Reserved
Intellectual property protection limits re-use and prevents experimentation. Therefore: When benefits come from collective adoption, not private restriction, make sure that barriers to adoption are low. Follow existing standards, and use licenses with as few restrictions as possible. Design for "hackability" and "remixability."

The Perpetual Beta
When devices and programs are connected to the internet, applications are no longer software artifacts, they are ongoing services. Therefore: Don't package up new features into monolithic releases, but instead add them on a regular basis as part of the normal user experience. Engage your users as real-time testers, and instrument the service so that you know how people use the new features.

Cooperate, Don't Control
Web 2.0 applications are built of a network of cooperating data services. Therefore: Offer web services interfaces and content syndication, and re-use the data services of others.
Support lightweight programming models that allow for loosely coupled systems.

Software Above the Level of a Single Device
The PC is no longer the only access device for internet applications, and applications that are limited to a single device are less valuable than those that are connected. Therefore: Design your application from the get-go to integrate services across handheld devices, PCs, and internet servers.

AJAX is also a key component of Web 2.0 applications such as Flickr, now part of Yahoo!, 37signals' applications Basecamp and Backpack, as well as other Google applications such as Gmail and Orkut. We're entering an unprecedented period of user interface innovation, as web developers are finally able to build web applications as rich as local PC-based applications.

Interestingly, many of the capabilities now being explored have been around for many years. In the late '90s, both Microsoft and Netscape had a vision of the kind of capabilities that are now finally being realized, but their battle over the standards to be used made cross-browser applications difficult. It was only when Microsoft definitively won the browser wars, and there was a single de facto browser standard to write to, that this kind of application became possible. And while Firefox has reintroduced competition to the browser market, at least so far we haven't seen the destructive competition over web standards that held back progress in the '90s.

We expect to see many new web applications over the next few years, both truly novel applications and rich web reimplementations of PC applications. Every platform change to date has also created opportunities for a leadership change in the dominant applications of the previous platform.

Gmail has already provided some interesting innovations in email, combining the strengths of the web (accessible from anywhere, deep database competencies, searchability) with user interfaces that approach PC interfaces in usability.
Meanwhile, other mail clients on the PC platform are nibbling away at the problem from the other end, adding IM and presence capabilities. How far are we from an integrated communications client combining the best of email, IM, and the cell phone, using VoIP to add voice capabilities to the rich capabilities of web applications? The race is on.

It's easy to see how Web 2.0 will also remake the address book. A Web 2.0-style address book would treat the local address book on the PC or phone merely as a cache of the contacts you've explicitly asked the system to remember. Meanwhile, a web-based synchronization agent, Gmail-style, would remember every message sent or received, every email address and every phone number used, and build social networking heuristics to decide which ones to offer up as alternatives when an answer wasn't found in the local cache. Lacking an answer there, the system would query the broader social network.

A Web 2.0 word processor would support wiki-style collaborative editing, not just standalone documents. But it would also support the rich formatting we've come to expect in PC-based word processors. Writely is a good example of such an application, although it hasn't yet gained wide traction.

Nor will the Web 2.0 revolution be limited to PC applications. Salesforce.com demonstrates how the web can be used to deliver software as a service, in enterprise-scale applications such as CRM.

The competitive opportunity for new entrants is to fully embrace the potential of Web 2.0. Companies that succeed will create applications that learn from their users, using an architecture of participation to build a commanding advantage not just in the software interface, but in the richness of the shared data.
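The tiered address-book lookup imagined above (local cache, then a web-based synchronization agent, then the broader social network) can be sketched as a chain of progressively wider sources. All three sources below are stand-in dictionaries with invented contacts; a real system would query live services at each tier.

```python
# Each tier maps a name to contact details; real systems would call services.
local_cache = {"alice": "alice@example.com"}   # contacts explicitly saved
web_agent   = {"bob": "bob@example.com"}       # every address ever used
social_net  = {"carol": "carol@example.com"}   # the broader social network

def lookup(name):
    """Consult progressively broader tiers until one has an answer."""
    for tier in (local_cache, web_agent, social_net):
        if name in tier:
            return tier[name]
    return None   # not found anywhere
```

The design point is that the local device holds only a cache; authority over the data lives in the network, which is the Web 2.0 inversion the passage describes.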
  • AJAX (Asynchronous JavaScript and XML) is one of the most significant technologies that allowed web applications to gain the functionality and look of desktop applications. The key term in the acronym AJAX is asynchronous: it refers to the fact that content can be loaded into the browser asynchronously, which means there is no longer any need to reload entire pages, as was common on the web. In this way AJAX brings a new paradigm of user interaction with the browser.

It enables:
presentation based on the XHTML and CSS standards;
dynamic display and interaction using the DOM (Document Object Model);
data interchange and manipulation using XMLHttpRequest and JavaScript, which ties everything together (Garrett, 2005).

As early as the Viola browser of 1992, applets and other kinds of active content were being delivered over the web. In Java, applets are programs embedded in an HTML page. Unlike Java, JavaScript and DHTML are thin, lightweight technologies that provide a desktop-application feel in the browser.

Social networking web sites belong to the class of applications known as Rich Internet Applications (RIA), which have many characteristics of desktop applications. In these applications a larger part of the business logic and control resides on the client side (in the web browser), which is largely enabled by technologies such as AJAX and Comet.

The term Rich Internet Applications was first used at Macromedia (now part of Adobe) to emphasize the ability of their product Flash to deliver not just multimedia content, but also the kind of user interaction found in desktop applications.
However, it was only after Google introduced Gmail, and soon afterwards Google Maps, web applications with a PC-like user interface and interactivity, that this potential was fully realized (Bernal, 2009).

The DOM (Document Object Model) is a language- and platform-independent convention for representing and interacting with objects in HTML, XHTML and XML documents.

XMLHttpRequest (XHR) is a DOM API that can be used from a web browser scripting language such as JavaScript. It is used to send HTTP or HTTPS requests directly to a web server and to load the server's response back into the scripting language (XMLHttpRequest, 2010).

Comet is a coined term describing a web application model in which a long-held HTTP request allows the web server to push data to the web browser without the browser explicitly requesting it. Comet covers several techniques for achieving this kind of interaction (Comet (programming), 2010).

http://www.adobe.com/
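The asynchronous model described above means the browser asks the server for a small data fragment and patches it into the page already on screen, instead of reloading the whole document. The sketch below contrasts the two server-side responses as plain functions; the message data and field names are invented, and the client-side script that would consume the JSON is omitted.

```python
import json

def full_page(messages):
    """Web 1.0 style: every request re-renders the entire HTML document."""
    items = "".join(f"<li>{m}</li>" for m in messages)
    return f"<html><body><ul>{items}</ul></body></html>"

def ajax_fragment(messages, since):
    """AJAX style: return as JSON only the messages newer than `since`;
    a client-side script would insert them into the existing page."""
    return json.dumps({"new_messages": messages[since:]})

inbox = ["hello", "meeting at 3", "lunch?"]
# The browser already shows the first two messages; it asks only for changes.
payload = ajax_fragment(inbox, since=2)
```

The payload is a fraction of the full page, which is why AJAX applications feel as responsive as desktop software: the round trip carries data, not presentation.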
  • Perpetual beta has come to be associated with the development and release of a service in which constant updates are the foundation for the habitability or usability of a service. According to publisher and open source advocate Tim O'Reilly:

"Users must be treated as co-developers, in a reflection of open source development practices (even if the software in question is unlikely to be released under an open source license). The open source dictum, 'release early and release often', in fact has morphed into an even more radical position, 'the perpetual beta', in which the product is developed in the open, with new features slipstreamed in on a monthly, weekly, or even daily basis. It's no accident that services such as Gmail, Google Maps, Flickr, del.icio.us, and the like may be expected to bear a 'Beta' logo for years at a time."
  • Data is the Next Intel Inside

Every significant internet application to date has been backed by a specialized database: Google's web crawl, Yahoo!'s directory (and web crawl), Amazon's database of products, eBay's database of products and sellers, MapQuest's map databases, Napster's distributed song database. As Hal Varian remarked in a personal conversation last year, "SQL is the new HTML." Database management is a core competency of Web 2.0 companies, so much so that we have sometimes referred to these applications as "infoware" rather than merely software.

This fact leads to a key question: Who owns the data?

In the internet era, one can already see a number of cases where control over the database has led to market control and outsized financial returns. The monopoly on domain name registry initially granted by government fiat to Network Solutions (later purchased by Verisign) was one of the first great moneymakers of the internet. While we've argued that business advantage via controlling software APIs is much more difficult in the age of the internet, control of key data sources is not, especially if those data sources are expensive to create or amenable to increasing returns via network effects.

Look at the copyright notices at the base of every map served by MapQuest, maps.yahoo.com, maps.msn.com, or maps.google.com, and you'll see the line "Maps copyright NavTeq, TeleAtlas," or with the new satellite imagery services, "Images copyright Digital Globe." These companies made substantial investments in their databases (NavTeq alone reportedly invested $750 million to build their database of street addresses and directions; Digital Globe spent $500 million to launch their own satellite to improve on government-supplied imagery). NavTeq has gone so far as to imitate Intel's familiar Intel Inside logo: cars with navigation systems bear the imprint, "NavTeq Onboard."
Data is indeed the Intel Inside of these applications, a sole-source component in systems whose software infrastructure is largely open source or otherwise commodified.

The now hotly contested web mapping arena demonstrates how a failure to understand the importance of owning an application's core data will eventually undercut its competitive position. MapQuest pioneered the web mapping category in 1995, yet when Yahoo!, and then Microsoft, and most recently Google, decided to enter the market, they were easily able to offer a competing application simply by licensing the same data.

Contrast, however, the position of Amazon.com. Like competitors such as Barnesandnoble.com, its original database came from ISBN registry provider R.R. Bowker. But unlike MapQuest, Amazon relentlessly enhanced the data, adding publisher-supplied data such as cover images, tables of contents, indexes, and sample material. Even more importantly, they harnessed their users to annotate the data, such that after ten years, Amazon, not Bowker, is the primary source for bibliographic data on books, a reference source for scholars and librarians as well as consumers. Amazon also introduced their own proprietary identifier, the ASIN, which corresponds to the ISBN where one is present, and creates an equivalent namespace for products without one. Effectively, Amazon "embraced and extended" their data suppliers.

Imagine if MapQuest had done the same thing, harnessing their users to annotate maps and directions, adding layers of value. It would have been much more difficult for competitors to enter the market just by licensing the base data.

The recent introduction of Google Maps provides a living laboratory for the competition between application vendors and their data suppliers. Google's lightweight programming model has led to the creation of numerous value-added services in the form of mashups that link Google Maps with other internet-accessible data sources.
Paul Rademacher's housingmaps.com, which combines Google Maps with Craigslist apartment rental and home purchase data to create an interactive housing search tool, is the pre-eminent example of such a mashup.

At present, these mashups are mostly innovative experiments, done by hackers. But entrepreneurial activity follows close behind. And already, one can see that for at least one class of developer, Google has taken the role of data source away from NavTeq and inserted themselves as a favored intermediary. We expect to see battles between data suppliers and application vendors in the next few years, as both realize just how important certain classes of data will become as building blocks for Web 2.0 applications.

The race is on to own certain classes of core data: location, identity, calendaring of public events, product identifiers and namespaces. In many cases, where there is significant cost to create the data, there may be an opportunity for an Intel Inside style play, with a single source for the data. In others, the winner will be the company that first reaches critical mass via user aggregation, and turns that aggregated data into a system service.

For example, in the area of identity, PayPal, Amazon's 1-click, and the millions of users of communications systems may all be legitimate contenders to build a network-wide identity database. (In this regard, Google's recent attempt to use cell phone numbers as an identifier for Gmail accounts may be a step towards embracing and extending the phone system.) Meanwhile, startups like Sxip are exploring the potential of federated identity, in quest of a kind of "distributed 1-click" that will provide a seamless Web 2.0 identity subsystem. In the area of calendaring, EVDB is an attempt to build the world's largest shared calendar via a wiki-style architecture of participation.
While the jury's still out on the success of any particular startup or approach, it's clear that standards and solutions in these areas, effectively turning certain classes of data into reliable subsystems of the "internet operating system", will enable the next generation of applications.

A further point must be noted with regard to data, and that is user concerns about privacy and their rights to their own data. In many of the early web applications, copyright is only loosely enforced. For example, Amazon lays claim to any reviews submitted to the site, but in the absence of enforcement, people may repost the same review elsewhere. However, as companies begin to realize that control over data may be their chief source of competitive advantage, we may see heightened attempts at control.

Much as the rise of proprietary software led to the Free Software movement, we expect the rise of proprietary databases to result in a Free Data movement within the next decade. One can see early signs of this countervailing trend in open data projects such as Wikipedia, the Creative Commons, and in software projects like Greasemonkey, which allow users to take control of how data is displayed on their computer.
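Amazon's "embrace and extend" of the ISBN namespace, described above, amounts to an identifier scheme that reuses the supplier's ID when one exists and mints a proprietary one otherwise. The sketch below illustrates that rule; the format of the minted identifiers is invented for illustration and does not reflect how real ASINs are generated.

```python
import itertools

_counter = itertools.count(1)   # sequence for minted IDs (illustrative only)

def assign_id(product):
    """Return the product's catalog ID: the supplier's ISBN where present,
    otherwise a newly minted proprietary identifier (hypothetical format)."""
    if product.get("isbn"):
        return product["isbn"]
    return "X%09d" % next(_counter)   # 10-character made-up namespace

book   = {"title": "A Pattern Language", "isbn": "9780195019193"}
gadget = {"title": "USB lamp"}   # no ISBN: gets its own proprietary ID
```

The strategic effect is that the proprietary namespace is a strict superset of the supplier's: every supplier ID still works, but products the supplier cannot describe belong to the platform alone.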
  • Flickr is a popular, easy-to-use photo sharing service that was acquired by Yahoo!. It allows users to easily upload, tag, and share photos, and also to provide feedback and ratings. The tagging allows generating dynamic groups of photos ("Sunset" or "Thailand," for example), and the rating system accumulates feedback to rank pictures by popularity. In a similar way, several other services allow users to share, tag, and rate photos as well as other content, including audio and video. Google, for example, invites Internet users to upload their video content with optional tagging, thus creating a large archive of tagged content.
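The tag-driven dynamic groups described above can be sketched as an inverted index from tag to photos: any tag then yields a group on demand, with no fixed album structure. The photos and tags below are made up for illustration.

```python
from collections import defaultdict

photos = [
    {"id": 1, "tags": {"sunset", "thailand"}},
    {"id": 2, "tags": {"sunset", "beach"}},
    {"id": 3, "tags": {"thailand"}},
]

def index_by_tag(photos):
    """Build tag -> set of photo ids, so each tag is a dynamic group."""
    groups = defaultdict(set)
    for photo in photos:
        for tag in photo["tags"]:
            groups[tag].add(photo["id"])
    return groups

groups = index_by_tag(photos)
# groups["sunset"] is the dynamic "Sunset" group: photos 1 and 2
```

Because the index is rebuilt from user-supplied tags, new groups appear the moment a new tag is used, which is what makes tagging cheaper than curated categories.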
  • Social bookmarking is a method for Internet users to organize, store, manage and search for bookmarks of resources online. Many online bookmark management services have launched since 1996; Delicious, founded in 2003, popularized the terms "social bookmarking" and "tagging". Tagging is a significant feature of social bookmarking systems, enabling users to organize their bookmarks in flexible ways and develop shared vocabularies known as folksonomies. Unlike file sharing, the resources themselves aren't shared, merely bookmarks that reference them.

Descriptions may be added to these bookmarks in the form of metadata, so users may understand the content of the resource without first needing to download it for themselves. Such descriptions may be free text comments, votes in favour of or against its quality, or tags that collectively or collaboratively become a folksonomy. Folksonomy is also called social tagging, "the process by which many users add metadata in the form of keywords to shared content".[1]

In a social bookmarking system, users save links to web pages that they want to remember and/or share. These bookmarks are usually public, and can be saved privately, shared only with specified people or groups, shared only inside certain networks, or another combination of public and private domains. The allowed people can usually view these bookmarks chronologically, by category or tags, or via a search engine.

Most social bookmark services encourage users to organize their bookmarks with informal tags instead of the traditional browser-based system of folders, although some services feature categories/folders or a combination of folders and tags. They also enable viewing bookmarks associated with a chosen tag, and include information about the number of users who have bookmarked them.
Some social bookmarking services also draw inferences from the relationship of tags to create clusters of tags or bookmarks.

Many social bookmarking services provide web feeds for their lists of bookmarks, including lists organized by tags. This allows subscribers to become aware of new bookmarks as they are saved, shared, and tagged by other users.

As these services have matured and grown more popular, they have added extra features such as ratings and comments on bookmarks, the ability to import and export bookmarks from browsers, emailing of bookmarks, web annotation, and groups or other social network features.[2]

History

The concept of shared online bookmarks dates back to April 1996 with the launch of itList,[3] the features of which included public and private bookmarks.[4] Within the next three years, online bookmark services became competitive, with venture-backed companies such as Backflip, Blink, Clip2, ClickMarks, HotLinks, and others entering the market.[5][6] They provided folders for organizing bookmarks, and some services automatically sorted bookmarks into folders (with varying degrees of accuracy).[7] Blink included browser buttons for saving bookmarks;[8] Backflip enabled users to email their bookmarks to others[9] and displayed "Backflip this page" buttons on partner websites.[10] Lacking viable revenue models, this early generation of social bookmarking companies failed as the dot-com bubble burst; Backflip closed citing "economic woes at the start of the 21st century".[11] In 2005, the founder of Blink said, "I don't think it was that we were 'too early' or that we got killed when the bubble burst. I believe it all came down to product design, and to some very slight differences in approach."[12]

Founded in 2003, Delicious (then called del.icio.us) pioneered tagging[13] and coined the term social bookmarking.
In 2004, as Delicious began to take off, similar services Furl and Simpy were released, along with CiteULike and Connotea (sometimes called social citation services) and the related recommendation system StumbleUpon. In 2006, Ma.gnolia (later renamed Gnolia), Blue Dot (later renamed Faves), Mister Wong, and Diigo entered the bookmarking field, and Connectbeam included a social bookmarking and tagging service aimed at businesses and enterprises. In 2007, IBM released its Lotus Connections product.[14] In 2009, Pinboard launched as a bookmarking service with paid accounts.[15] As of 2011, Furl, Simpy, Gnolia, and Faves are no longer active services.

Digg was founded in 2004 with a related system for sharing and ranking social news, followed by Reddit in 2005 and Newsvine in 2006.

Folksonomy

A simple form of shared vocabulary does emerge in social bookmarking systems (folksonomy). Collaborative tagging exhibits a form of complex-systems (or self-organizing) dynamics.[16] Although there is no central controlled vocabulary to constrain the actions of individual users, the distributions of tags that describe different resources have been shown to converge over time to stable power-law distributions.[16] Once such stable distributions form, the correlations between different tags can be examined to construct simple folksonomy graphs, which can be efficiently partitioned to obtain a form of community or shared vocabularies.[17] While such vocabularies suffer from some of the informality problems described below, they can be seen as emerging from the decentralized actions of many users, as a form of crowdsourcing.

From the point of view of search data, there are drawbacks to such tag-based systems: no standard set of keywords (i.e., a folksonomy instead of a controlled vocabulary), no standard for the structure of such tags (e.g., singular vs.
plural, capitalization), mistagging due to spelling errors, tags that can have more than one meaning, unclear tags due to synonym/antonym confusion, unorthodox and personalized tag schemata from some users, and no mechanism for users to indicate hierarchical relationships between tags (e.g., a site might be labeled as both cheese and cheddar, with no mechanism that might indicate that cheddar is a refinement or sub-class of cheese).

Uses

For users, social bookmarking can be useful as a way to access a consolidated set of bookmarks from various computers, organize large numbers of bookmarks, and share bookmarks with contacts. Libraries have found social bookmarking to be useful as an easy way to provide lists of informative links to patrons.[18]
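The folksonomy graphs mentioned above are built from correlations between tags. The sketch below computes the raw material for such a graph: how often each pair of tags is applied to the same bookmark. The bookmark data is invented; a real system would feed millions of bookmarks through the same counting step before partitioning the resulting graph.

```python
from collections import Counter
from itertools import combinations

bookmarks = [
    {"url": "a", "tags": {"cheese", "cheddar"}},
    {"url": "b", "tags": {"cheese", "recipe"}},
    {"url": "c", "tags": {"cheese", "cheddar", "recipe"}},
]

def cooccurrence(bookmarks):
    """Count, for each unordered tag pair, the bookmarks carrying both tags.
    The counts are the edge weights of a simple folksonomy graph."""
    pairs = Counter()
    for bookmark in bookmarks:
        for t1, t2 in combinations(sorted(bookmark["tags"]), 2):
            pairs[(t1, t2)] += 1
    return pairs

edges = cooccurrence(bookmarks)
```

Note that co-occurrence alone cannot tell that cheddar is a sub-class of cheese; it only shows the two tags travel together, which is exactly the hierarchy limitation the passage describes.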
  • The name "PageRank" is a trademark of Google, and the PageRank process has been patented (U.S. Patent 6,285,999). However, the patent is assigned to Stanford University, not to Google; Google has exclusive license rights on the patent from Stanford. The university received 1.8 million shares of Google in exchange for use of the patent; the shares were sold in 2005 for $336 million.[2][3]

eBay

eBay is an online marketplace, a forum for the exchange of goods. The feedback system on eBay asks each user to post an opinion (positive or negative) on the person with whom he or she transacted. Everywhere a user's handle ("ID") is displayed, that user's feedback is displayed with it. Since primarily positive feedback improves a user's reputation and therefore makes other users more comfortable dealing with him or her, users are encouraged to behave in acceptable ways, that is, to deal squarely with other users, both as buyers and as sellers.

Most users are extremely averse to negative feedback and will go to great lengths to avoid it. There is even such a thing as feedback blackmail, in which a party to a transaction threatens negative feedback to gain an unfair concession. The fear of negative feedback is so great that many users automatically leave positive feedback, with strongly worded comments, in the hope of getting the same in return. Research has accordingly shown that a very large share (greater than 98%) of all transactions result in positive feedback, and academic researchers have called the entire eBay system into question based on these results. The main result of the eBay reputation management system is that buyers and sellers are generally honest; there are abuses, but not to the extent there might be in a completely open or unregulated marketplace.

Amazon

Amazon allows users to submit reviews to the web page of each product. Reviewers must rate the product on a scale from one to five stars: one star means the product is abysmal, five stars that it is stellar. Amazon provides badges for reviewers that indicate the reviewer's real name (based on confirmation of a credit card account) or that the reviewer is one of the top reviewers by popularity. Customers may comment or vote on reviews, indicating whether or not they found them helpful. If a review receives enough "helpful" votes, it appears on the front page of the product. A problem has been created by Amazon's habit of copying reviews onto the pages of other editions of the "same" book, because often the book is not the same at all. As of November 2011, editions of the new translation of the Roman Missal were accompanied on both amazon.com and amazon.co.uk by old reviews of the old translation, many of which advise the purchaser to wait until the new translation appears and discuss details of content and presentation that have nothing to do with the new edition.
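The PageRank process mentioned above can be illustrated with a short power-iteration sketch over a toy link graph. The graph and parameters here are invented for illustration; this is a textbook simplification, not Google's implementation:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Iteratively distribute rank over a {page: [outlinks]} graph."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page keeps a small "teleport" share of rank...
        new = {p: (1.0 - damping) / n for p in pages}
        # ...and passes the rest along its outgoing links.
        for p, outs in links.items():
            if outs:
                share = rank[p] / len(outs)
                for q in outs:
                    new[q] += damping * share
            else:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Toy web: both "a" and "b" link to "hub", so it accumulates the most rank.
graph = {"a": ["hub"], "b": ["hub"], "hub": ["a"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # "hub"
```

The total rank stays normalized to 1 on every iteration, so each page's score can be read as the long-run probability that a random surfer lands on it.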
  • Blogging and the Wisdom of Crowds

One of the most highly touted features of the Web 2.0 era is the rise of blogging. Personal home pages have been around since the early days of the web, and the personal diary and daily opinion column much longer than that, so just what is the fuss all about?

At its most basic, a blog is just a personal home page in diary format. But as Rich Skrenta notes, the chronological organization of a blog "seems like a trivial difference, but it drives an entirely different delivery, advertising and value chain."

One of the things that has made a difference is a technology called RSS. RSS is the most significant advance in the fundamental architecture of the web since early hackers realized that CGI could be used to create database-backed websites. RSS allows someone not just to link to a page, but to subscribe to it, with notification every time that page changes. Skrenta calls this "the incremental web." Others call it the "live web."

Now, of course, "dynamic websites" (i.e., database-backed sites with dynamically generated content) replaced static web pages well over ten years ago. What's dynamic about the live web are not just the pages, but the links. A link to a weblog is expected to point to a perennially changing page, with "permalinks" for any individual entry and notification for each change. An RSS feed is thus a much stronger link than, say, a bookmark or a link to a single page.

The Architecture of Participation

Some systems are designed to encourage participation. In his paper The Cornucopia of the Commons, Dan Bricklin noted that there are three ways to build a large database. The first, demonstrated by Yahoo!, is to pay people to do it. The second, inspired by lessons from the open source community, is to get volunteers to perform the same task; the Open Directory Project, an open source Yahoo! competitor, is the result. But Napster demonstrated a third way.
Because Napster set its defaults to automatically serve any music that was downloaded, every user automatically helped to build the value of the shared database. This same approach has been followed by all other P2P file-sharing services.

One of the key lessons of the Web 2.0 era is this: users add value. But only a small percentage of users will go to the trouble of adding value to your application via explicit means. Therefore, Web 2.0 companies set inclusive defaults for aggregating user data and building value as a side effect of ordinary use of the application. As noted above, they build systems that get better the more people use them.

Mitch Kapor once noted that "architecture is politics." Participation is intrinsic to Napster, part of its fundamental architecture. This architectural insight may also be more central to the success of open source software than the more frequently cited appeal to volunteerism. The architecture of the internet and the World Wide Web, as well as of open source software projects like Linux, Apache, and Perl, is such that users pursuing their own "selfish" interests build collective value as an automatic byproduct. Each of these projects has a small core, well-defined extension mechanisms, and an approach that lets any well-behaved component be added by anyone, growing the outer layers of what Larry Wall, the creator of Perl, refers to as "the onion." In other words, these technologies demonstrate network effects simply through the way they have been designed.

These projects can be seen to have a natural architecture of participation. But as Amazon demonstrates, by consistent effort (as well as economic incentives such as the Associates program), it is possible to overlay such an architecture on a system that would not normally seem to possess it.

RSS also means that the web browser is not the only means of viewing a web page.
While some RSS aggregators, such as Bloglines, are web-based, others are desktop clients, and still others allow users of portable devices to subscribe to constantly updated content.

RSS is now being used to push not just notices of new blog entries, but all kinds of data updates, including stock quotes, weather data, and photo availability. This use is actually a return to one of its roots: RSS was born in 1997 out of the confluence of Dave Winer's "Really Simple Syndication" technology, used to push out blog updates, and Netscape's "Rich Site Summary," which allowed users to create custom Netscape home pages with regularly updated data flows. Netscape lost interest, and the technology was carried forward by blogging pioneer Userland, Winer's company. In the current crop of applications, though, we see the heritage of both parents.

But RSS is only part of what makes a weblog different from an ordinary web page. Tom Coates remarks on the significance of the permalink:

"It may seem like a trivial piece of functionality now, but it was effectively the device that turned weblogs from an ease-of-publishing phenomenon into a conversational mess of overlapping communities. For the first time it became relatively easy to gesture directly at a highly specific post on someone else's site and talk about it. Discussion emerged. Chat emerged. And, as a result, friendships emerged or became more entrenched. The permalink was the first, and most successful, attempt to build bridges between weblogs."

In many ways, the combination of RSS and permalinks adds many of the features of NNTP, the Network News Protocol of Usenet, onto HTTP, the web protocol. The "blogosphere" can be thought of as a new, peer-to-peer equivalent of Usenet and bulletin boards, the conversational watering holes of the early internet.
Not only can people subscribe to each other's sites and easily link to individual comments on a page, but also, via a mechanism known as trackbacks, they can see when anyone else links to their pages and can respond, either with reciprocal links or by adding comments.

Interestingly, two-way links were the goal of early hypertext systems like Xanadu. Hypertext purists have celebrated trackbacks as a step toward two-way links. But note that trackbacks are not properly two-way; rather, they are really (potentially) symmetrical one-way links that create the effect of two-way links. The difference may seem subtle, but in practice it is enormous. Social networking systems like Friendster, Orkut, and LinkedIn, which require acknowledgment by the recipient in order to establish a connection, lack the same scalability as the web. As noted by Caterina Fake, co-founder of the Flickr photo-sharing service, attention is only coincidentally reciprocal. (Flickr thus allows users to set watch lists: any user can subscribe to any other user's photostream via RSS. The object of attention is notified, but does not have to approve the connection.)

If an essential part of Web 2.0 is harnessing collective intelligence, turning the web into a kind of global brain, the blogosphere is the equivalent of constant mental chatter in the forebrain, the voice we hear in all of our heads. It may not reflect the deep structure of the brain, which is often unconscious, but is instead the equivalent of conscious thought. And as a reflection of conscious thought and attention, the blogosphere has begun to have a powerful effect.

First, because search engines use link structure to help predict useful pages, bloggers, as the most prolific and timely linkers, have a disproportionate role in shaping search engine results. Second, because the blogging community is so highly self-referential, bloggers paying attention to other bloggers magnifies their visibility and power.
The "echo chamber" that critics decry is also an amplifier. If it were merely an amplifier, blogging would be uninteresting. But like Wikipedia, blogging harnesses collective intelligence as a kind of filter. What James Surowiecki calls "the wisdom of crowds" comes into play, and much as PageRank produces better results than analysis of any individual document, the collective attention of the blogosphere selects for value.

While mainstream media may see individual blogs as competitors, what is really unnerving is that the competition is with the blogosphere as a whole. This is not just a competition between sites, but a competition between business models. The world of Web 2.0 is also the world of what Dan Gillmor calls "we, the media," a world in which "the former audience," not a few people in a back room, decides what's important.
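The RSS subscription model discussed in this section is, at bottom, just polling and parsing an XML document. A minimal sketch using Python's standard library; the feed content here is invented, and a real aggregator would fetch the XML over HTTP on a schedule:

```python
import xml.etree.ElementTree as ET

# A hypothetical minimal RSS 2.0 feed, as a subscriber might download it.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Weblog</title>
    <item>
      <title>First post</title>
      <link>http://example.com/2005/first-post</link>
      <pubDate>Mon, 03 Oct 2005 10:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

def parse_feed(xml_text):
    """Return (channel title, list of (item title, permalink)) from RSS 2.0 text."""
    root = ET.fromstring(xml_text)
    channel = root.find("channel")
    items = [(i.findtext("title"), i.findtext("link"))
             for i in channel.findall("item")]
    return channel.findtext("title"), items

title, items = parse_feed(FEED)
print(title, items[0][1])
```

Each `<link>` is the permalink the text discusses; an aggregator polls the feed periodically and surfaces only the items it has not seen before.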
  • "In 2008, if you're not on a social networking site, you're not on the Internet." The old communication model was a monologue. 78% of people trust the recommendations of other consumers (Nielsen "Trust in Advertising" report, October 2007). The new communication model is a dialogue. People are talking about your brand, right now.
  • "Any blog that spins the truth will be found out. In a world of social media, honesty is the only policy."
  • The emergence of Web 2.0, beyond introducing new technologies, also influenced the creation of new business models. Most large companies, media houses, and even politicians are still looking for ways to exploit these technologies to reach their customers and improve communication with them. For example, after problems with parts in certain car models, Toyota used a TweetMeme page to restore its reputation and its customers' trust; on the other hand, it missed the opportunity to use its Facebook profile page to preempt the negative image forming in the media (Marshall, 2010). Unlike Toyota, which failed to take advantage of new Web technologies, US president Barack Obama has been called the first Internet president. He used Web 2.0 technologies to communicate with potential voters during his presidential campaign: his speeches were posted to YouTube in full (in contrast to the usual excerpts on TV news), and social networks enabled a new way of communicating with potential voters (Barack Obama has over 10 million friends on Facebook and MySpace, and over 3.5 million followers on Twitter) (Nations).

http://toyotaconversations.com/
http://www.facebook.com/toyota
http://www.youtube.com/user/BarackObamadotcom
http://www.facebook.com/barackobama
http://www.myspace.com/barackobama
http://twitter.com/BARACKOBAMA
  • AJAX (Asynchronous JavaScript and XML) enables the creation of interactive Web applications, offering the functionality and look of a desktop application inside a web browser.

An API can be implemented by applications, libraries, and operating systems to define the vocabularies and calling conventions for their functions, thereby granting access to their services (API).

Representational State Transfer (REST) is a style of software architecture for distributed hypermedia systems such as the WWW. A REST architecture consists of clients and servers: the client initiates a request to the server, and the server processes the request and returns an appropriate response (Representational State Transfer).

RSS (Really Simple Syndication) is a family of web feed formats used to publish frequently updated content, such as blogs, news headlines, and audio and video, in a standardized format. An RSS document, called a feed, web feed, or channel, includes summarized text and metadata such as publication date and authorship (RSS).
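The REST request/response cycle described above can be sketched without any network at all: the server maps an (HTTP method, resource path) pair onto an operation over named resources. A toy in-memory sketch, with invented resource names, standing in for a real HTTP service:

```python
# Toy in-memory REST-style server: resources are addressed by path
# and manipulated with the standard HTTP verbs.
store = {}

def handle(method, path, body=None):
    """Dispatch a request and return a (status code, response body) pair."""
    if method == "GET":
        return (200, store[path]) if path in store else (404, None)
    if method == "PUT":
        store[path] = body          # create or replace the resource
        return (200, body)
    if method == "DELETE":
        return (204, store.pop(path, None))
    return (405, None)              # method not allowed

# The client initiates requests; the server processes them and responds.
handle("PUT", "/users/1", {"name": "Ana"})
status, user = handle("GET", "/users/1")
print(status, user)
```

The design point is that the verbs carry all the operation semantics, so any client that knows a resource's path can manipulate it without service-specific calls.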
  • Interface

The Gmail interface differs from other webmail systems in its focus on search and its "conversation view" of email, which groups several replies onto a single page. Gmail's user-experience designer, Kevin Fox, intended users to feel as if they were always on one page and just changing things on that page, rather than having to navigate to other places.

Before its acquisition by Google, the gmail.com domain name was used by a free email service offered by Garfield.com, the online home of the comic strip Garfield. After moving to a different domain, that service has since been discontinued.[43] As of June 22, 2005, Gmail's canonical URI changed from http://gmail.google.com/gmail/ to http://mail.google.com/mail/.[44] As of November 2010, those who typed in the former URI were redirected to the latter.

Google Maps (formerly Google Local) is a web mapping service application and technology provided by Google, free for non-commercial use, that powers many map-based services, including the Google Maps website, Google Ride Finder, Google Transit,[1] and maps embedded on third-party websites via the Google Maps API.[2] It offers street maps; a route planner for traveling by foot, car, bike (beta), or public transport; and an urban business locator for numerous countries around the world. Google Maps satellite images are not updated in real time; they are several months or years old. Google Maps uses a close variant of the Mercator projection, so it cannot show areas around the poles. A related product is Google Earth, a stand-alone program that offers more globe-viewing features, including polar areas.

A Brief History of Ajax

New technology quickly becomes so pervasive that it's sometimes hard to remember what things were like before it.
The latest example of this in miniature is the technique known as Ajax, which has become so widespread that it's often thought the technique has been around practically forever.

In some ways it has. During the first big stretch of browser innovation, Netscape added a feature known as LiveScript, which allowed people to put small scripts in web pages so that they could continue to do things after you'd downloaded them. One early example was the Netscape form system, which would tell you if you'd entered an invalid value for a field as soon as you entered it, instead of after you tried to submit the form to the server.

LiveScript became JavaScript and grew more powerful, leading to a technique known as Dynamic HTML, which was typically used to make things fly around the screen and change in response to user input. Doing anything serious with Dynamic HTML was painful, however, because all the major browsers implemented its pieces slightly differently.

Shortly before web development died out, in early versions of Mozilla, Netscape showed a new kind of technique. I don't think it ever had a name, but we could call it Dynamic XML. The most vivid example I remember seeing was a mockup of an Amazon.com search result. The web page looked just like a typical Amazon.com search result page, but instead of being written in HTML it was a piece of XML data that was then rendered for the user by a piece of JavaScript. The cool part was that this meant the rendering could be changed on the fly: there were a bunch of buttons that would allow you to sort the books in different ways and have them display using different schemes.

Shortly thereafter the bubble burst and web development crashed. Not, however, before Microsoft added a little-known function call named XMLHttpRequest to IE5.
Mozilla quickly followed suit and, while nobody I know used it, the function stayed there, just waiting to be taken advantage of. XMLHttpRequest allowed the JavaScript inside web pages to do something it could never really do before: get more data. Before, all the data had to be sent with the web page; if you wanted more data or new data, you had to grab another web page. The JavaScript inside web pages couldn't talk to the outside world. XMLHttpRequest changed that, allowing web pages to get more data from the server whenever they pleased.

Google was apparently the first to realize what a sea change this was. With Gmail and Google Maps, they built applications that took advantage of this to provide a user interface that was much more like a web application. (The startup Oddpost, bought by Yahoo!, actually predated this, but their software was for-pay and so they didn't receive as much attention.)

With Gmail, for example, the application is continually asking the server if there's new email. If there is, it live-updates the page; it doesn't make you download a new one. And Google Maps lets you drag a map around and, as you do so, automatically downloads the parts you want to look at inline, without making you wait for a whole new page to download.

Jesse James Garrett of Adaptive Path described this new tactic as Ajax (Asynchronous JavaScript and XML) in an essay, and the term immediately took off. Everyone began using the technique in their own software, and JavaScript toolkits sprang up to make doing so even easier. And the rest is future history.

Both systems were relatively ill-supported by browsers in my experience. They were, after all, hacks.
So while they both seemed extremely cool (KnowNow, in particular, had an awesome demo that allowed for a WYSIWYG SubEthaEdit-style live collaboration session in a browser), they never really took off.

Now apparently there is another technique, which I was unaware of, that involved changing the URL of an iframe to load new JavaScript. I'm not sure why this technique didn't quite take off. While Google Maps apparently used it (and Oddpost probably did as well), I don't know of any other major users.
  • http://www.json.org/

JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write, and easy for machines to parse and generate. It is based on a subset of the JavaScript programming language, Standard ECMA-262 3rd Edition (December 1999). JSON is a text format that is completely language independent but uses conventions familiar to programmers of the C family of languages, including C, C++, C#, Java, JavaScript, Perl, Python, and many others. These properties make JSON an ideal data-interchange language.

JSON is built on two structures:

  • A collection of name/value pairs. In various languages, this is realized as an object, record, struct, dictionary, hash table, keyed list, or associative array.
  • An ordered list of values. In most languages, this is realized as an array, vector, list, or sequence.

These are universal data structures; virtually all modern programming languages support them in one form or another. It makes sense that a data format that is interchangeable with programming languages also be based on these structures.
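The two structures described above map directly onto the object/dictionary and array/list types of most languages. A small round-trip sketch using Python's standard `json` module:

```python
import json

# A name/value collection (JSON object) containing an ordered list (JSON array).
record = {"name": "Ana", "languages": ["C", "Python"], "active": True}

text = json.dumps(record)    # serialize to the language-independent text form
restored = json.loads(text)  # parse it back into native Python structures

print(text)
print(restored == record)    # the round trip preserves both structures
```

Because the same two structures exist in C#, Java, Perl, and the rest, the `text` string produced here can be parsed by any of those languages' JSON libraries unchanged.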
  • Made on the Web, designed by us

By Dion Hinchcliffe | August 17, 2010, 6:19am PDT

Summary: With a new survey showing that the majority of people on the Web are willing to co-create, crowdsourcing is looking like a repeatable, reliable way to outsource work and partner with online communities to create concrete results. Crowdsourcing has become increasingly attractive to small businesses as well as enterprises, yet this category remains stubbornly in the experimentation phase even as some firms start racking up significant wins. Here are the pros and cons of this approach, as well as how companies get started in what is shaping up to be one of the most significant new approaches to global business in this century.

With crowdsourcing, how we produce ideas and work in the 21st century will be very different indeed.

The notion of the Internet as a medium for self-expression and creativity is certainly not a new one. The Web 2.0 era has repeatedly underscored the point that when an online service successfully taps into the wellspring of global innovation that is us, the outcomes can be historic. Now traditional businesses are finding ways to take advantage of the same power. It's no accident that the top five sites on the Web as of this writing are Google, Yahoo, Facebook, YouTube, and Wikipedia. The vast majority of all content on the latter three services is entirely user-generated, while the first, Google, is arguably merely a reflection of our own content after passing through their patented algorithm. The implication is that what we collectively create has become the most significant and popular part of the Web.

In recent years a new and slightly different take on user-generated content has emerged. The growing popularity and professionalism of crowdsourcing has led to a number of online services that specialize in meeting the specific needs of businesses by enlisting their communities to tackle anyone's challenges, usually in exchange for some form of compensation.
The advantages are clear: very low-cost access to enthusiasts and experts in a given subject, broader input of new ideas, faster response to needs, and so on. The disadvantages are less obvious, but usually boil down to lack of control, occasionally unsatisfactory results, and concerns about security and privacy.
But the biggest question has been whether sourcing to the Web is a repeatable and reliable way to run a business. Can you really count on a largely unknown group of people on the Web to predictably provide you with the results you need day-to-day? While the fast-growing number of crowdsourcing services (I count several new ones a week now) speaks volumes, another interesting new data point surfaced this month suggesting that the wellspring might indeed be as large as required for widespread adoption. A new Forrester report issued last week says that although half of all firms have yet to use social media to drive product design, creation, or strategy, approximately "sixty-one percent of all US online adults are willing co-creators" if they are so tapped.
A majority are willing to co-create. Source: North American Technographics Consumer Technology Online Benchmark Recontact Survey, Q2 2010.
This is a significant piece of data, and if borne out it shows that how we produce ideas and work in the 21st century will be very different indeed. For its part, The New York Times has been running a long series of pieces on crowdsourcing over the last year, most recently this small business guide, where much of the focus has been on the growth and richness of co-created products and services for SMBs. This is an interesting departure from much of the use of Web 2.0 in business, which has often been dominated in recent years by the enterprise space. Large firms have often had the resources and spare capital to invest significantly in capabilities such as Enterprise 2.0 and Social CRM. Not that the Global 2000 has eschewed crowdsourcing either.
Many notable examples exist here too, particularly with well-known enterprise crowdsourcing services such as Innocentive and Ideascale. Yet an informal survey of some of the newer crowdsourcing services, such as Tongal and Whinot, shows that small businesses are the major customer base, if not the actual target. Almost certainly that is because crowdsourcing offers by far the least expensive route to conduct the activities that small businesses are often too financially strapped to do any other way. This is part of the dramatic collapse of traditional cost structures for many classical business activities that seems to be a hallmark of Social Business in general. For some, this will provide a major competitive advantage and associated gains, at least as long as the leaders in their sector are still using the old methods. For others, particularly late arrivals, it will no doubt become a required competency but perhaps not offer the same disruptive potential.
The real insight in the data above, however, is that there does seem to be a vast and ready reservoir of talent that can be tapped inexpensively by most businesses. When people around the world are more than willing to tell you what they want, why wouldn't businesses listen? I've long pointed out the not-invented-here syndrome that afflicts companies once they reach a certain size, and it's certainly one of the major reasons that about half of businesses ignore significantly improved ways of operating. Yet much of it is accidental, due to the time it takes to absorb and act on fundamentally new ideas. Unfortunately, the rest of the world hasn't been waiting, and as I pointed out at the top of the post, many of the global winners online are largely built by their customers.
For the rest of us, how might we best go about applying crowdsourcing to our lines of business with minimal disruption and the best benefit?
Like so many things, adoption is a matter of looking at the opportunity with an open perspective.
How to experiment with crowdsourcing. For those looking at exploring this new approach to sourcing work, here are some common-sense steps:
- Look for areas where traditional methods aren't working in your business today. This might be in customer support, where the crowdsourcing aspect of Social CRM can offer significant help, or it could be in product design, testing, or another place entirely where not enough people with the right ideas or skills can be applied effectively. Good signs are when some important business function is either far too expensive, doesn't scale, or is producing poor-quality results (typically due to too few usable inputs).
- Identify candidate crowdsourcing services. It often helps to identify more than one, so that they can be run against each other in parallel to get the best results. I'll be creating a detailed round-up of crowdsourcing services shortly; in the meantime, use Ross Dawson's useful Crowdsourcing Landscape as a good starting point for your list. New services are emerging all the time, however, so it's worth taking time to explore and find applicable candidates.
- Engage in pilots with your candidate services. One of the good things about crowdsourcing is that it's typically very inexpensive compared to alternatives like subcontractors or outsourcing. Consequently, you can usually try out all available services until you find one that works best for you. Keep an eye out for the ones that attract top performers (these services are often more expensive than the others, but again, sometimes not by much) and, if possible, whose community is aligned well with your primary business.
- Move successful pilots into operation while cultivating new contributors. Unlike traditional business relationships, which are usually far more stable and change less frequently, the world of open business models is much more dynamic.
Be prepared to keep discovering, evaluating, and participating in new communities to tap into. The crowdsourcing journey is fraught with change, challenges, and opportunities, and there is no final destination. Communities ebb and flow, and change is the norm in this new discipline.
In the future, many of the products and services that we use could be labeled "Made on the Web, designed by us". But for now, this nascent field offers early adopters a powerful and easy-to-use new alternative for building scalable, fast-growing businesses at a fraction of the cost of the slow and inefficient methods of yesteryear. For those that can clearly see their future, anyway.
As this field continues to grow and expand, I'll be exploring more of the stories of crowdsourcing and other open business models in coming months. Please share your experiences with sourcing to the Web below in Talkback.
http://www.zdnet.com/blog/hinchcliffe/made-on-the-web-designed-by-us/1394
  • The Turk, also known as the Mechanical Turk or Automaton Chess Player (German: Schachtürke, "chess Turk"; Hungarian: A Török), was a fake chess-playing machine constructed in the late 18th century. From 1770 until its destruction by fire in 1854, it was exhibited by various owners as an automaton, though it was exposed in the early 1820s as an elaborate hoax.[1] Constructed and unveiled in 1770 by Wolfgang von Kempelen (1734–1804) to impress the Empress Maria Theresa, the mechanism appeared to be able to play a strong game of chess against a human opponent, as well as perform the knight's tour, a puzzle that requires the player to move a knight to occupy every square of a chessboard exactly once.
The Amazon Mechanical Turk (MTurk) is a crowdsourcing Internet marketplace that enables computer programmers (known as Requesters) to coordinate the use of human intelligence to perform tasks that computers are as yet unable to do. It is part of the Amazon Web Services suite. Requesters can post tasks known as HITs (Human Intelligence Tasks), such as choosing the best among several photographs of a storefront, writing product descriptions, or identifying performers on music CDs. Workers (called Providers in Mechanical Turk's Terms of Service) can then browse existing tasks and complete them for a monetary payment set by the Requester. To place HITs, requesting programs use an open Application Programming Interface, or the more limited MTurk Requester site.[1] Requesters are restricted to US-based entities.
  • Wikis are changing the way we work together. Last month, I attended the "Introduction to Wikis" course offered through the National Institutes of Health's (NIH) Center for Information Technology (CIT). This course is just one example of the widespread interest in wikis across HHS.
In layman's terms, a wiki is an online document that can be accessed by multiple people and edited directly. Wikis present users with the most recent version of the document up front, while still allowing access to older versions and edit histories. For many projects, wikis are more efficient than collaborating through email. They can be used to promote creativity and transparency while encouraging participation and information sharing.
At NIH, wikis are advancing communication and collaboration and even facilitating scientific advances. From authoring articles in their specific fields to editing entries for accuracy or sharing laboratory expertise, NIH scientists are using wikis for a wide range of activities. Wikis are also being used to communicate static content to communities, as well as for day-to-day communication about team activities. But, like most technologies, you can't roll out a wiki and expect people to use it and magically create collaboration. There must be a culture in which employees trust one another, adequate training must be provided, and policies and procedures are necessary to bring the vision of wikis to full fruition.
One of the biggest challenges is getting user buy-in. The value of creating a workspace that is supposed to cut across silos, geography, and time-zone differences is lost if everyone who needs to collaborate doesn't learn to use the wiki and develop the habit of using it. Some people may hesitate to use the wiki because they are uncomfortable with the transparency of the work process, or believe they won't get credit for their work and ideas under the more collaborative work process.
Additionally, procedures and policies need to be in place about how, when, and by whom the wiki can be used, to get any value out of it. Another consideration is that it can be difficult for a really large community to effectively edit one document, even on a wiki, but publishing a document for comment and asking for feedback is greatly facilitated.
Characteristics
Ward Cunningham and co-author Bo Leuf, in their book The Wiki Way: Quick Collaboration on the Web, described the essence of the wiki concept as follows:
- A wiki invites all users to edit any page or to create new pages within the wiki website, using only a plain-vanilla Web browser without any extra add-ons.
- A wiki promotes meaningful topic associations between different pages by making page-link creation almost intuitively easy and by showing whether an intended target page exists or not.
- A wiki is not a carefully crafted site for casual visitors. Instead, it seeks to involve the visitor in an ongoing process of creation and collaboration that constantly changes the website's landscape.
A wiki enables communities to write documents collaboratively, using a simple markup language and a web browser. A single page in a wiki website is referred to as a "wiki page", while the entire collection of pages, which are usually well interconnected by hyperlinks, is "the wiki". A wiki is essentially a database for creating, browsing, and searching through information. A wiki allows for non-linear, evolving, complex, and networked text, argument, and interaction.[12]
A defining characteristic of wiki technology is the ease with which pages can be created and updated. Generally, there is no review before modifications are accepted. Many wikis are open to alteration by the general public without requiring users to register accounts. Many edits can be made in real time and appear almost instantly online. This can facilitate abuse of the system.
Private wiki servers require user authentication to edit pages, and sometimes even to read them.
Maged N. Kamel Boulos, Cito Maramba and Steve Wheeler write that it is the "openness of wikis that gives rise to the concept of 'Darwikinism', which is a concept that describes the 'socially Darwinian process' that wiki pages are subject to. Basically, because of the openness of wikis and the rapidity with which wiki pages can be edited, the pages undergo a natural selection process like that which nature subjects to living organisms. 'Unfit' sentences and sections are ruthlessly culled, edited and replaced if they are not considered 'fit', which hopefully results in the evolution of a higher quality and more relevant page. Whilst such openness may invite 'vandalism' and the posting of untrue information, this same openness also makes it possible to rapidly correct or restore a 'quality' wiki page."[13]
Editing wiki pages
There are many different ways in which wikis let users edit content. Ordinarily, the structure and formatting of wiki pages are specified with a simplified markup language, sometimes known as wikitext (for example, starting a line of text with an asterisk often sets up a bulleted list). The style and syntax of wikitexts can vary greatly among wiki implementations, some of which also allow HTML tags. Designers of wikis often take this approach because HTML, with its many cryptic tags, is not very legible, making it hard to edit. Wikis therefore favour plain-text editing, with fewer and simpler conventions than HTML, for indicating style and structure. Although limiting access to HTML and Cascading Style Sheets (CSS) limits users' ability to alter the structure and formatting of wiki content, there are some benefits.
Limited access to CSS promotes consistency in the look and feel, and having JavaScript disabled prevents a user from implementing code that may limit access for other users. An example of the three layers:
MediaWiki syntax:
"Take some more [[tea]]," the March Hare said to Alice, very earnestly.
"I've had '''nothing''' yet," Alice replied in an offended tone, "so I can't take more."
"You mean you can't take ''less''?" said the Hatter. "It's very easy to take ''more'' than nothing."
Equivalent HTML:
<p>"Take some more <a href="/wiki/Tea" title="Tea">tea</a>," the March Hare said to Alice, very earnestly.</p>
<p>"I've had <b>nothing</b> yet," Alice replied in an offended tone, "so I can't take more."</p>
<p>"You mean you can't take <i>less</i>?" said the Hatter. "It's very easy to take <i>more</i> than nothing."</p>
Rendered output:
"Take some more tea," the March Hare said to Alice, very earnestly.
"I've had nothing yet," Alice replied in an offended tone, "so I can't take more."
"You mean you can't take less?" said the Hatter. "It's very easy to take more than nothing."
Increasingly, wikis are making WYSIWYG editing available to users, usually by means of JavaScript or an ActiveX control that translates graphically entered formatting instructions into the corresponding HTML tags or wikitext. In those implementations, the markup of a newly edited version of the page is generated and submitted to the server transparently, shielding the user from this technical detail. However, WYSIWYG controls do not always provide all of the features available in wikitext, and some users prefer not to use a WYSIWYG editor. Hence, many of these sites offer some means to edit the wikitext directly.
Most wikis keep a record of changes made to wiki pages; often, every version of the page is stored. This means that authors can revert to an older version of the page, should that be necessary because a mistake has been made or the page has been vandalized.
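A hedged sketch of the kind of translation a wiki engine performs, converting the three markup conventions from the tea-party example ('''bold''', ''italic'', and [[links]]) into HTML with regular expressions; a real wikitext parser handles many more cases and edge conditions:

```python
import re

def wikitext_to_html(text):
    """Minimal, illustrative wikitext-to-HTML translation (not a full parser)."""
    # Handle '''bold''' before ''italic'' so the triple quotes win.
    text = re.sub(r"'''(.+?)'''", r"<b>\1</b>", text)
    text = re.sub(r"''(.+?)''", r"<i>\1</i>", text)
    # [[tea]] -> internal link, capitalizing the target page name.
    text = re.sub(r"\[\[(.+?)\]\]",
                  lambda m: '<a href="/wiki/%s">%s</a>' % (m.group(1).capitalize(), m.group(1)),
                  text)
    return text

print(wikitext_to_html("I can't take '''more''' than ''less'' [[tea]]."))
```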
Many implementations, like MediaWiki, allow users to supply an edit summary when they edit a page; this is a short piece of text summarising the changes. It is not inserted into the article but is stored along with that revision of the page, allowing users to explain what has been done and why; this is similar to a log message when committing changes to a revision-control system.
Navigation
Within the text of most pages, there are usually a large number of hypertext links to other pages. This form of non-linear navigation is more "native" to a wiki than structured/formalized navigation schemes. That said, users can also create any number of index or table-of-contents pages, with hierarchical categorization or whatever form of organization they like. These may be challenging to maintain by hand, as multiple authors create and delete pages in an ad hoc manner. Wikis generally provide one or more ways to categorize or tag pages to support the maintenance of such index pages. Most wikis have a backlink feature, which displays all pages that link to a given page. It is typical in a wiki to create links to pages that do not yet exist, as a way to invite others to share what they know about a subject new to the wiki.
Linking and creating pages
Links are created using a specific syntax, the so-called "link pattern" (also see CURIE). Originally, most wikis used CamelCase to name pages and create links. These are produced by capitalizing words in a phrase and removing the spaces between them (the word "CamelCase" is itself an example). While CamelCase makes linking very easy, it also leads to links written in a form that deviates from standard spelling. To link to a page with a single-word title, one must abnormally capitalize one of the letters in the word (e.g. "WiKi" instead of "Wiki"). CamelCase-based wikis are instantly recognizable because they have many links with names such as "TableOfContents" and "BeginnerQuestions."
It is possible for a wiki to render the visible anchor for such links "pretty" by reinserting spaces, and possibly also reverting to lower case. However, this reprocessing of the link to improve the readability of the anchor is limited by the loss of capitalization information caused by CamelCase reversal. For example, "RichardWagner" should be rendered as "Richard Wagner," whereas "PopularMusic" should be rendered as "popular music". There is no easy way to determine which capital letters should remain capitalized. As a result, many wikis now have "free linking" using brackets, and some disable CamelCase by default.
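The "pretty" rendering described above can be sketched in a few lines: a regular expression reinserts a space at every lowercase-to-uppercase boundary, which recovers "Richard Wagner" from "RichardWagner" but, as the text notes, cannot know that "PopularMusic" ought to become lowercase "popular music":

```python
import re

def prettify_camelcase(page_name):
    """Reinsert spaces before the interior capitals of a CamelCase page name."""
    return re.sub(r"(?<=[a-z])(?=[A-Z])", " ", page_name)

print(prettify_camelcase("RichardWagner"))    # → Richard Wagner
print(prettify_camelcase("PopularMusic"))     # → Popular Music (capitalization info is lost)
print(prettify_camelcase("TableOfContents"))  # → Table Of Contents
```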
  • 2.2.1.3 User-generated content
On social networking sites, the users themselves are responsible for adding and updating the information that is available to everyone. These virtual communities let their members exchange opinions, discuss, collaborate, and debate topics tied to their shared interests. User-generated content and the exchange of posted information are the main driving force behind forming and sustaining online communities (Bernal, 2009).
User-generated content is website content created and updated by users rather than by conventional media. A typical example is Flickr, an online photo-sharing community holding over 37 million pictures, where 1.2 million users upload as many as 200,000 pictures a day. User-generated content has changed how the Internet is used: instead of one-way transmission of media content by a handful of companies, content is created by users and exchanged among them (Seongwoon, et al., 2010).
Wikis are an example of websites whose entire content is user generated. Wiki articles are created through the participation of many people, using a simple markup language inside a web browser. User-generated content and open editing are a wiki's greatest strength, but also its basic weakness. Other drawbacks are the inability to verify the validity of content and the lack of control. There have been cases, such as the Seigenthaler incident (Seigenthaler, 2005), when skepticism about the accuracy of Wikipedia's content proved justified. Despite all these drawbacks, the open model works well in practice. Many analysts, and users, consider Wikipedia a serious competitor to the Encyclopaedia Britannica (Wikipedia survives research test, 2005).
Another example of collaborative content creation is http://delicious.com, a site for storing, adding, and tagging bookmarks online.
The drawback of bookmarks kept on the user's own computer is that they cannot be used from another location or shared with other users. Online bookmarking sites such as Delicious allow bookmarks to be stored and organized in one central place on the Web, so users can build online social networks connecting them with friends, family, and colleagues, or with complete strangers who share similar interests. By using the connections in this (or any other) social network, users can discover new information by following other users' behavior, more easily and efficiently than, say, by e-mail.
An analysis of delicious.com shows that social network users are at the center of attention: they are the main participants in creating and exchanging content. The fact that signing up for and using these sites is free and simple is what attracts large numbers of users to social networking sites.
http://www.flickr.com/
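A minimal sketch of the tagging model behind a bookmarking service like Delicious: each user attaches free-form tags to URLs, and the service inverts that mapping to find every bookmark carrying a given tag. All user names and URLs here are invented for the example:

```python
from collections import defaultdict

# user -> list of (url, tags) bookmarks; the sample data is invented
bookmarks = {
    "ana":   [("http://example.com/json-intro", {"json", "web"}),
              ("http://example.com/wiki-howto", {"wiki"})],
    "marko": [("http://example.com/json-intro", {"json", "tutorial"})],
}

def by_tag(bookmarks):
    """Invert the user->bookmarks mapping into tag -> set of URLs (a folksonomy index)."""
    index = defaultdict(set)
    for user, items in bookmarks.items():
        for url, tags in items:
            for tag in tags:
                index[tag].add(url)
    return index

index = by_tag(bookmarks)
print(sorted(index["json"]))  # → ['http://example.com/json-intro']
```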
  • 2.2.3 Social Semantic Web
Social networks do not represent a radical change in technology as such, but rather a change in how human-machine interaction is viewed: the user is placed at the center of the system and given control over interaction with other users and with the computer. The Web, however, must be made "more intelligent" to properly support such social interactions (Sfakianakis, 2009). The information on Web 2.0 sites needs shared meaning. That would make it possible to connect different components and data sources, which could then be integrated and organized more easily in response to user interaction (Greaves, 2007).
Semantic Web technologies make it possible to attach structured data to user-created content. This opens the possibility of performing various computations and inferences over the collected data, enabling the discovery of knowledge in the form of answers, findings, or other results that cannot be found in the existing content alone (Gruber, 2007). The Web 2.0 features that should be used to add semantic meaning to content are:
- recommendations, functionality that already exists on Web 2.0 sites such as Amazon ("customers who bought this item also bought...") or http://delicious.com (classification of online content based on user tags), and
- personalization, as used in customized feeds (Kiss, 2008).
The need to add more meaning to social network content is part of the initiatives carried out in the work on Semantic Web standards. Social network content carrying semantic meaning could be interlinked and searched more easily.
The benefit of using today's social networks would increase considerably, the boundaries of how social networking sites are used would be pushed outward, and the limitations of keyword-based Web search would be lifted (Sfakianakis, 2009).
The open and linked data already mentioned as the foundation of the Semantic Web are the main constraint in realizing the social Semantic Web. The two main shortcomings of social networks are:
- content locked inside the social networking site, and
- the inability to exchange user data between different social networks.
Semantic Web technologies offer a solution that would significantly affect the content created during users' online social networking activities. By creating, tagging, and interacting with objects on the Web, users would produce semantically rich content in a way that does not depend on any particular website. Social environments would emerge that encompass users' personal data and data about the social networks they belong to, and these environments could be used in all of a user's online activities. The result of attaching semantic meaning to data on the Web and to user profiles would be a network of people, or People Web. Significant characteristics of users and objects would become globally available, which changes the character of online communities and improves Web search (Ramakrishnan & Tomkins, 2007).
One example of value added to existing Web 2.0 content is the semantic wiki (Krötzsch, Völkel, Vrandečić, Haller, & Studer, 2007). A semantic wiki uses semantic descriptions of the links and content on a page, enabling more intelligent search and navigation. Although the details vary from wiki to wiki, there is usually an underlying model based on RDF and OWL supporting them, so the content can be exported into an appropriate Semantic Web format.
DBpedia is an interesting example of a true semantic wiki, offering the content of Wikipedia in a machine-readable, searchable format (Sfakianakis, 2009).
Characteristics of social networks and Web 2.0 are the aforementioned collective intelligence and the wisdom of the crowds. However, the term "collective intelligence" might better be reformulated as "collected intelligence", because Web 2.0 technologies let a large number of users add and edit content on the Web that remains available only within the website where it was created. Such data are not open.
At present, social networking applications are focused more on content delivery and management, and on the social connections that arise from them, than on providing semantically rich descriptions of data. There are also popular technologies, such as microformats and tagging/folksonomies, that are used to annotate and describe data. They come across as an ad hoc, unstructured effort compared to the formal Web ontologies and metadata descriptions of the Semantic Web (Sfakianakis, 2009).
A feed is encapsulated online content, such as news or a blog, that a user can subscribe to.
http://www.w3.org/2005/Incubator/socialweb/
http://dbpedia.org/About
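The RDF model underlying semantic wikis and DBpedia can be sketched without any libraries: the data is a set of (subject, predicate, object) triples, and a query is a pattern match over them. The triples below are invented illustrations, not real DBpedia data:

```python
# A tiny in-memory triple store: (subject, predicate, object) statements.
triples = {
    ("RichardWagner", "type",    "Composer"),
    ("RichardWagner", "born_in", "Leipzig"),
    ("Leipzig",       "type",    "City"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return sorted(t for t in triples
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

# All statements made about RichardWagner, regardless of predicate.
print(query(s="RichardWagner"))
```

SPARQL queries against a real RDF store work on the same principle, with variables standing in for the `None` wildcards.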
  • Socijalne mreže su se promenile i u pogledu broja svojih korisnika i u pogledu strukture, tako da sve veći broj ljudi svakodnevno koristi servise socijanih mreža. Neki od identifikovanih problema su:Fragmentacija razgovora, koja se prepoznaje kod blogova i activity stream-ova koji potiču sa više socijalnih mreža. Ovo se može rešiti kreiranjem centralnog kolaborativnog sadržaja, kao što Google pokušava sa svojim Wave formatom.Razlike između stare i nove generacije socijalnih medija. Novije platforme socijalnog weba su veoma centralizovane, posebno velike socijalne mreže, za razliku od wiki-a i blogova, koji su dosta distribuirani. Takođe, postoji razlika i u načinu na koji informacije teku od ovih servisa ka Webu.Nedostatak kontrole identiteta, kontakata i podataka vezanih za korisnika predstavlja ono o čemu se često debatuje na Webu, i još uvek nije nađeno adekvatno rešenje. S jedne strane veoma je naporno prebaciti podatke vezane za korisnika iz jedne socijalne mreže u drugu, a s druge strane u okviru jedne socijalne mreže sve je lakše koristiti ove podatke. Zasad je jedino rešenje korišćenje OAuth standarda (koji usvaja sve više online socijalnih mreža) ili korišćenje otvorenih API-a.Potreba za boljim socijalnim webom na mobilnim uređajima.Loša integracija između socijalnih mreža i servisa za pozicioniranje.Poteškoće u istovremenom učestvovanju u socijalnim aktivnostima na više kanala.Savladati i dobiti dodatne vrednost korišćenjem sve veće količine informacija sa socijalnih mreža.Standardi i tehnologije koji bi mogli da reše navedene probleme su:Otvoreni activity stream jeste centralni artefakt socijalnih medija. To su: Facebook-ov news feed, Twitter stream i lista u obrnutom hronološkom redosledu unosa u blog ili modifikacije wiki-a. Problem je što postoji previše tokova aktivnosti koje prosečan korisnik prati, koji su distribuirani, i naporni za interakciju i opažanje. 
They also come in several different syndication formats, such as RSS and Atom, or in some proprietary format. Several standards for social activity streams and their aggregation can be named; one already adopted by Facebook, MySpace, Windows Live and Opera, as well as Google and Yahoo!, is activitystrea.ms. Once content providers establish common standards and those standards officially become available in their tools and platforms, activity streams will be easier both to search and to post to a social network through well-defined APIs. This will give applications a uniform way to consume activity streams from many different social network channels. Activity stream aggregators such as FriendFeed, or the social dashboard TweetDeck, will be able to offer a single front end that should reduce the friction of using a growing number of social network channels. This may also resolve some of the differences between older and newer social networks. This standard, however, will not solve the closed nature of the social applications themselves, since standardizing social media formats will probably not create a uniform way of using popular social application formats, such as those of Facebook applications and OpenSocial.

Portable identity, contacts and data. Having a single chosen identity has been the holy grail of social web circles for several years. There are currently solutions for signing in to a website (OpenID, Facebook Connect and Google Accounts) and for directly accessing the data a user holds on a site (OAuth and OAuth WRAP). For those who build websites or use social networking tools, these standards are a mark of quality. The social graph is also becoming easier to port, despite so far unsuccessful attempts to standardize it.
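As a sketch of the uniform consumption described above, the snippet below parses one Activity Streams-style JSON entry (the actor/verb/object/published shape used by the activitystrea.ms JSON serialization) into a single human-readable line. The entry itself is invented for illustration.

```python
# Parse a minimal Activity Streams-style JSON entry into one line of text.
import json

raw = """
{
  "published": "2010-02-17T09:00:00Z",
  "actor": {"objectType": "person", "displayName": "Ana"},
  "verb": "post",
  "object": {"objectType": "note", "content": "Hello from my activity stream"}
}
"""

def summarize(activity: dict) -> str:
    # The same four fields describe a tweet, a blog post, or a wiki edit,
    # which is what makes one front end for many channels possible.
    actor = activity["actor"]["displayName"]
    verb = activity["verb"]
    obj = activity["object"]["objectType"]
    return f"{actor} {verb} {obj} at {activity['published']}"

print(summarize(json.loads(raw)))  # → Ana post note at 2010-02-17T09:00:00Z
```

An aggregator such as FriendFeed or TweetDeck can render any network's events through one such code path once they share the format, instead of one parser per network.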
There are also newer attempts, such as Portable Contacts, backed by Google and by notable people from the Open Web community such as Chris Messina and Joseph Smarr. WebFinger is also worth mentioning: it creates metadata about people in a much more open and maintainable way, without relying on profile pages, social networking services, or hard-to-maintain microformats. The Gravatar service lets users define what their profile picture will look like, and that picture is then used on every social network the user connects with, giving them a single way to manage their visual presence on the Web.

Better social networking and positioning capabilities on mobile devices. Social APIs in mobile devices that "understand" the Web (and things like Portable Contacts) have not yet appeared on the iPhone or in the Android SDK, beyond local access to the phone's address book. For now, individual applications that "understand" the social web must connect to social media on the Web themselves and access the activity stream and contacts. What is currently missing is a critical mass of major mobile device manufacturers putting intelligent social options into the device APIs themselves, which is why most mobile applications lack the capabilities needed to use social networking tools and services. Some of the mobile solutions are Bu.mp, for easy exchange of contacts, and Loopt, for connecting with friends from social networks and showing various information about the surroundings. Location-aware applications such as Yahoo!'s Fire Eagle and especially Google Latitude have made progress in adding positioning services to mobile applications. PhoneGap, an open source project, also offers positioning capabilities for the iPhone, Android and BlackBerry.
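Gravatar's scheme is simple enough to show in a few lines: the image URL is derived from an MD5 hash of the normalized (trimmed, lower-cased) e-mail address, so every site the user connects with computes the same URL. A minimal sketch; the e-mail address is made up:

```python
# Derive a Gravatar image URL from an e-mail address.
import hashlib

def gravatar_url(email: str, size: int = 80) -> str:
    # Gravatar hashes the trimmed, lower-cased address with MD5 and serves
    # the image at gravatar.com/avatar/<hash>; ?s= selects the pixel size.
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return f"https://www.gravatar.com/avatar/{digest}?s={size}"

print(gravatar_url("  Someone@Example.COM  "))
```

Because normalization happens before hashing, "User@Site.com" and " user@site.com " map to the same picture, which is what lets one avatar follow the user across networks.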
Such applications have also drawn criticism: they are seen as limiting privacy and are often compared to George Orwell's Big Brother.

Better distributed models for social networks. Examples of protocols that should enable this are: PubSubHubbub, used for faster and more efficient notification of updates in social media; the Salmon Protocol, which aims to define a standard protocol for comments and annotations that flow back to their original source, which changes as a result (this closed loop yields more comments); and the Open Mashup Alliance, which can also very easily connect social networking capabilities, both on the Web and in the enterprise (Hinchcliffe, The social Web in 2010: The emerging standards and technologies to watch, 2010).

https://wave.google.com/wave/?pli=1
An open protocol that enables secure API authorization using a simple, standard method, from desktop or Web applications (OAuth).
http://activitystrea.ms/
http://openid.net/
http://developers.facebook.com/connect.php
https://www.google.com/accounts/ManageAccount
http://wiki.oauth.net/OAuth-WRAP
http://wiki.portablecontacts.net/
http://factoryjoe.com/
http://josephsmarr.com/
http://hueniverse.com/webfinger/
http://en.gravatar.com/
http://bu.mp/
http://www.loopt.com/
http://fireeagle.yahoo.net/
http://www.google.com/intl/en_us/latitude/intro.html
http://www.phonegap.com/
http://pubsubhubbub.googlecode.com/svn/trunk/pubsubhubbub-core-0.3.html
http://www.openmashup.org/
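The PubSubHubbub 0.3 subscription handshake starts with a form-encoded POST from the subscriber to the hub, carrying the hub.callback, hub.mode, hub.topic and hub.verify parameters. The sketch below only builds that request body (all URLs are hypothetical); it does not perform the HTTP call.

```python
# Build the form-encoded body of a PubSubHubbub (spec 0.3) subscription request.
from urllib.parse import urlencode

def subscription_body(callback_url: str, topic_url: str) -> str:
    """Body a subscriber POSTs to a hub to subscribe to a feed."""
    return urlencode({
        "hub.callback": callback_url,  # where the hub later pushes new entries
        "hub.mode": "subscribe",
        "hub.topic": topic_url,        # the feed being subscribed to
        "hub.verify": "async",         # hub verifies intent via the callback
    })

body = subscription_body("http://example.org/callback",
                         "http://example.org/feed.atom")
print(body)
```

After this request the hub verifies the subscriber's intent at the callback URL and then pushes new feed entries to it, which is what makes update notification faster than repeated polling.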
  • 2.4.1 Social network aggregation is the process of collecting content created on multiple social networks. This task is usually performed by a social network aggregator, which places the information in a single location or helps the user consolidate multiple social network profiles into one (Social network aggregation). Social network aggregators make it easier to manage content created on the various social networks a single user belongs to. However, they do not solve the key problem of social networks: the inability to freely exchange content between different networks. The problem lies in the lack of open and/or interlinked data, since each social network uses its own data format. This is why social networks are often compared to isolated islands or walled gardens.
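What an aggregator does can be illustrated in a few lines: take items from several per-network streams and merge them into one reverse-chronological stream. The entries below are invented for illustration.

```python
# Toy social network aggregation: merge per-network streams into one
# reverse-chronological stream, as aggregators like FriendFeed do.
facebook = [("2010-03-02", "facebook", "status update")]
twitter = [("2010-03-03", "twitter", "tweet"),
           ("2010-03-01", "twitter", "tweet")]
blog = [("2010-02-27", "blog", "new post")]

def aggregate(*streams):
    # Flatten all streams, then sort newest-first by the ISO date field.
    merged = [item for stream in streams for item in stream]
    return sorted(merged, key=lambda item: item[0], reverse=True)

for date, source, kind in aggregate(facebook, twitter, blog):
    print(date, source, kind)
```

Note that the hard part in practice is not this merge but the line above it: each real network exposes its items in its own format, so an aggregator must first write one converter per network, which is exactly the lack of open, interlinked data the text describes.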
  • Foursquare: Location-Aware Social Networking Done Right
By Frederic Lardinois / March 12, 2009

Just in time for SXSW, Dennis Crowley, one of the original developers of Dodgeball, has released a new location-aware social app (iTunes link) for the iPhone: Foursquare. At its core, Foursquare is a location-based social network, but at the same time, it is also a nightlife game where users get extra points for being the first to visit a new place, going out every night of the week, and for adding new information about clubs and restaurants.

Foursquare draws from a list of known restaurants, bars, and nightclubs. The iPhone app also integrates reviews from Yelp and Google Maps. The app currently features a database for 12 of the larger metropolitan areas in the U.S., including Boston, New York, San Francisco, LA, Chicago, Seattle, Portland, and Austin. We expect that Foursquare will add data for more cities in the next few months. If a certain restaurant or bar is not in Foursquare's database, you can also add your own.

Competition
The competitive elements are clearly stressed in this app, as you don't just get points for everything you do on the service (which are tracked on a leaderboard); you can also receive badges for special achievements. Some of those, like being the 'mayor' of a certain place (the player who has checked in from this location more often than anybody else), are only given to one user at a time, and we can see how this might spur some fierce local competitions.

Verdict
Foursquare is a very cool application, and unlike other location-aware apps, it adds a competitive element to the interaction, as it rewards you for checking in whenever you arrive at a new location. This solves a major problem that has held back a lot of similar apps like Loopt or Brightkite in the past: users simply didn't have enough of an incentive to open up the app and check in every time they arrived somewhere new. With Foursquare, however, checking in becomes the thing to do. Overall, we can see how this app could become quite addictive and we wouldn't be surprised if it took Austin by storm this weekend.

Geosocial networking is a type of social networking in which geographic services and capabilities such as geocoding and geotagging are used to enable additional social dynamics.[1][2] User-submitted location data or geolocation techniques can allow social networks to connect and coordinate users with local people or events that match their interests. Geolocation on web-based social network services can be IP-based or use hotspot trilateration. For mobile social networks, texted location information or mobile phone tracking can enable location-based services to enrich social networking.
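The "connect users with local people" step in geosocial networking boils down to a distance test on coordinates. A minimal sketch using the haversine great-circle formula; the names and coordinates are made up (Ana's are near Belgrade, Bob's near Paris):

```python
# Find users within a radius of a given position, haversine-style.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def nearby(me, users, radius_km=5.0):
    """Names of users whose last reported position is within radius_km."""
    return [name for name, lat, lon in users
            if haversine_km(me[0], me[1], lat, lon) <= radius_km]

users = [("Ana", 44.8176, 20.4569),   # near Belgrade
         ("Bob", 48.8566, 2.3522)]    # near Paris
print(nearby((44.8125, 20.4612), users))  # → ['Ana']
```

Real services layer the hard parts on top of this: obtaining the positions in the first place (GPS, IP, or trilateration, as the text notes) and the privacy controls around sharing them.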
  • Who’s Playing Social Games? [STATS]
February 17, 2010 by Samuel Axon

Casual game-maker PopCap Games (famous for Plants vs. Zombies and Bejeweled) commissioned Information Services Group to perform a survey of people who play social games online. It found that the average social gamer is a 43-year-old woman, despite long-standing social stereotypes about people who play games.

Note that "social games" are web-based games like Mafia Wars and FarmVille that you play on social networks like Facebook, not Xbox 360 action games like Gears of War or Halo; young males still dominate that branch of the gaming hobby.

The survey found that 55% of social gamers are female and 45% are male. Females are more avid gamers, too: 38% of females said they play multiple times a day, but just 29% of males said the same. Women are more likely to play with people they know (68% vs. 56% for males), and men are more likely to play with strangers (41% vs. 33%) than women are.

There were more insights in the survey beyond gender. Facebook is the most popular destination for online games, with 83% of respondents saying they have played games there. Twenty-eight percent have purchased in-game currency with real-world money. The average gamer has played six social games, and more than 50% of gamers started playing a game because a friend recommended it or because they saw a friend playing it in a news feed or other social stream.

One hundred million people are playing these games and about $1 billion in revenue is expected this year. It's no wonder folks like Flickr's Stewart Butterfield are diving into the market.

An online game is a game played over some form of computer network. This almost always means the Internet or equivalent technology, but games have always used whatever technology was current: modems before the Internet, and hard-wired terminals before modems. The expansion of online gaming has reflected the overall expansion of computer networks, from small local networks to the Internet, and the growth of Internet access itself. Online games can range from simple text-based games to games incorporating complex graphics and virtual worlds populated by many players simultaneously. Many online games have associated online communities, making online games a form of social activity beyond single-player games.
  • http://secondlife.com/

A virtual world is an online community that takes the form of a computer-based simulated environment through which users can interact with one another and use and create objects.[1] The term has become largely synonymous with interactive 3D virtual environments, where the users take the form of avatars visible to others.[2] These avatars usually appear as textual, two-dimensional, or three-dimensional representations, although other forms are possible (auditory and touch sensations, for example).[3][4] In general, virtual worlds allow for multiple users.

The computer accesses a computer-simulated world and presents perceptual stimuli to the user, who in turn can manipulate elements of the modeled world and thus experience a degree of telepresence.[5] Such modeled worlds and their rules may draw from reality or from fantasy worlds. Example rules are gravity, topography, locomotion, real-time actions, and communication. Communication between users can range across text, graphical icons, visual gesture, and sound and, rarely, forms using touch, voice command, and the sense of balance.

Massively multiplayer online games depict a wide range of worlds, including those based on fantasy, science fiction, the real world, superheroes, sports, horror, and historical milieus. The most common form of such games are fantasy worlds, whereas those based on the real world are relatively rare.[6] Many MMORPGs have real-time actions and communication. Players create a character who travels between buildings, towns, and worlds to carry out business or leisure activities. Communication is usually textual, but real-time voice communication is also possible. The form of communication used can substantially affect the experience of players in the game.[7]

Virtual worlds are not limited to games but, depending on the degree of immediacy presented, can encompass computer conferencing and text-based chatrooms. Sometimes, emoticons or 'smilies' are available to show feeling or facial expression. Emoticons often have a keyboard shortcut.[8] Edward Castronova is an economist who has argued that "synthetic worlds" is a better term for these cyberspaces, but this term has not been widely adopted.

http://secondlife.com/whatis/?lang=en-US#Education_&_Enterprise
  • http://www.techrepublic.com/blog/project-management/social-coding-the-next-wave-in-development/3257

Social coding -- the next wave in development
By Rick Freedman
July 19, 2011, 1:44 PM PDT

Takeaway: New social coding tools are enabling a revolution in product development. Rick Freedman reveals what he sees as the key stumbling block for social development.

In my last post, I pondered some of the new roles that project managers (PMs) will take on as projects become more distributed, dispersed, and diverse. I'm fascinated by the idea of "social coding," software development by a distributed, virtual community. I decided to do a little research on the tools and technologies that are enabling this new way of thinking about product development.

I've seen development environments that contain most of the components of a social coding platform, such as code repositories, wikis, tracking tools, personal profiles, RSS feeds, project microsites, and discussion threads, but they're frequently rigged together from disparate, incompatible parts. Application lifecycle management environments, which previously were focused on version control, now offer all of these social elements in a single, integrated hub or platform. These capabilities combine to enable a social community that can self-direct and deliver robust products, as open source environments like GitHub have proven.

GitHub claims to be the largest code host in the world, with two million repositories and one million users. According to GitHub's mission statement:

"Code is about the people writing it. We focus on lowering the barriers of collaboration by building powerful features into our products that make it easier to contribute. The tools we create help individuals and companies, public and private, to write better code, faster."

GitHub's CEO and founder Tom Preston-Werner said recently:

"We like the ideas of social networking. We think that developers work more effectively when they work together. So let's take the ideas of a social network and add on top of that code hosting, and let's create a site that makes it easy to share and collaborate on code. We have a way to follow users, and we have a way to watch repositories. In your dashboard you can then see lists of all the people that you're watching and the actions that they're doing. If they make commits, if they edit a wiki, whatever they're doing you can see that. You can get an RSS feed of that. You can see it in a bunch of different ways. It just allows you to keep tabs on what you find important in that ecosystem.

We also have the notion of profiles. So you can go to someone's profile page and see a list of their repositories, when they've been working on them, so you can keep tabs on who's working on what, and to what degree, and if they're responsible for a project. It makes it easy to see who's responsible for code in certain places."

The capabilities that we're accustomed to from social networks like Facebook, and those we often jury-rig together, like tracking and wikis, are now stacked on top of a robust, replicable code repository, to create an environment focused on development communities. Projects become virtual villages, with self-directed development occurring worldwide, and team structures emerging rather than being planned.

Many enterprises shy away from public-access repositories for security and control purposes. Products like CollabNet's TeamForge and other Application Lifecycle Management tools target this enterprise space. CollabNet recently released a new version, 6.1, of TeamForge that specifically touts the company's "Be Social" philosophy. I chatted with Lothar Schubert, CollabNet's senior director of product marketing for TeamForge, about the social elements of the product. He said:

"The idea of communities is based on the strong heritage of CollabNet in the open source community. Software development happens within the context of a team. The team must interact constantly not only with each other but with the code. Social coding requires the ability to provide collaborative authoring, using tools like wikis that are tied to specific artifacts, to the code. You need collaborative networking capabilities, such as discussion forums and RSS feeds. We also provide the capability to create social networks around shared communities, to build project microsites. Of course, last but not least is the repository of code."

The social coding concepts that have been evolving in the open source community, and are now being integrated into open source hubs like SourceForge and GitHub, are migrating to the enterprise market, as companies recognize that even their internal projects can be managed as self-directed open source efforts.

I also chatted about social coding and the new tools that enable it with Dave West, a Principal Analyst focused on Application Delivery Professionals for Forrester Research. When we discussed social coding, Dave commented:

"We're social animals, and the development of stuff tends to be social in nature. We think a better way of thinking about it is as social development. It's not just about the coding; it's about enabling business change and delivering business value in a social way. It will be interesting to see how CollabNet, with tools like TeamForge, broadens the idea from social coding to social development. Social coding is a fundamental change."

I asked Dave about the challenges of adopting these ideas:

"We're seeing huge productivity gains from developers and testers. We're now building the wrong software really well. How do we get teams to align to the business, how do we enable a culture of business agility across the enterprise? That's the tricky stuff, as most companies are built around the factory, hierarchical model, not the collaborative model."

Dave picks out the key stumbling block for social development. Every migration, as from ad hoc to PMI and CMM methods, and from PMI to agile and scrum, gets stuck on the human elements of resistance, agenda, and culture. These new social coding tools enable a revolution in product development through communities; the challenge is getting the organization and the project teams to think and act as communities.
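The "keep tabs on who's working on what" idea from the interview maps directly onto GitHub's REST API. The sketch below just constructs the standard call that lists a user's public repositories (GET /users/{user}/repos); it does not perform the network request, which is what an actual client would do next.

```python
# Build the GitHub REST API URL that lists a user's public repositories.
from urllib.parse import quote

API_ROOT = "https://api.github.com"

def user_repos_url(username: str, per_page: int = 30) -> str:
    """URL for GET /users/{user}/repos, with basic pagination."""
    return f"{API_ROOT}/users/{quote(username)}/repos?per_page={per_page}"

# "mojombo" is Tom Preston-Werner's GitHub handle.
print(user_repos_url("mojombo"))
```

Fetching the URL returns JSON describing each repository (name, description, last push), which is the raw material for the dashboards and profile pages the article describes.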
  • http://www.zdnet.com/blog/hinchcliffe/the-big-five-it-trends-of-the-next-half-decade-mobile-social-cloud-consumerization-and-big-data/1811The "Big Five" IT trends of the next half decade: Mobile, social, cloud, consumerization, and big dataBy Dion Hinchcliffe | October 2, 2011, 5:34pm PDTSummary: In today’s ever more technology-centric world, the stodgy IT department isn’t considered the home of innovation and business leadership. Yet that might have to change as some of the biggest advances in the history of technology make their way into the front lines of service delivery. Here’s an exploration of the top five IT trends in the next half decade, including some of the latest industry data, and what the major opportunities and challenges are.“Much or most of these topics are in back burner mode in many companies just now seeing the glimmerings of recovery from the downturn.Much has been written lately about the speed at which technology is reshaping the business landscape today. Except that’s not quite phrasing it correctly. It’s more like it’s leaving the traditional business world behind. There are a number of root causes: The blistering pace of external innovation, the divergent path the consumer world has taken from enterprise IT, and the throughput limitations of top-down adoption.As a result, there’s a rapidly expanding gap between what the technology world is executing on and what the enterprise can deliver. Many now think this gap may actually become untenable, and they may be right. Yet recent large surveys of CIOs continues to show an almost exclusively evolutionary and internal focus. Many feel that a technology emphasis is wrong right now, and they’re certainly right, if it’s not integrated with top priority business objectives. 
However, these days it’s technology advancements and new digital markets that are often the key to an organization’s future.At the end of the day, businesses must be able to effectively serve the markets they cater to, and doing so means using the same channels and techniques as their trading partners and customers. Organizations must adapt to the evolving marketplace to succeed. Fortunately, I do believe there are approaches that can yet be adopted to address this increasingly significant challenge.A tectonic technology shiftOne only need look at what’s on the mind of CIOs these days (60% believe they should bedirectly driving growth and productivity) versus what they’re well known for delivering on. Or perhaps more problematically, what their IT organizations are able to deliver on. Never in my two decades of experience in the IT world have I seen such a disparity between where the world is heading as a whole and the technology approach that many companies are using to run their businesses.The issues are legion: There are at least five major “generational scale” changes to the computing landscape happening at about the same time: Delivery platforms are shifting (mobility, cloud, social), communication and collaboration channels are being reinvented (Web, mobile, social), the consumer world of technology is driving innovation, and data is opening up and exploding out of the proliferating apps, devices, and sensors that organizations are deploying or are connecting to (but alas, are often not engaging with.) And as you might expect, much or most of these topics are in back burner mode in many companies just now seeing the glimmerings of recovery from the downturn.Moreover, workers are now demanding many of these innovations and expecting their organizations to provide something close in capability to what they can get nearly for free (or actually for free) on their own devices and networks. 
Managers and executives, albeit mostly on the business side, are typically pushing for 1) service delivery on next-generation mobile devices like the iPad, 2) much easier-to-use IT solutions, and 3) access to better, more collaborative and useful intranet capabilities. "Easy", highly mobile, and "social" are the mantras of this new generation of IT. So too is the rapid (read: instant) acquisition and delivery of business solutions. There is a growing realization among workers and management that technology, though increasingly complex in itself, can be wielded far more rapidly and efficiently than their current parochial capabilities allow.

But this is not a blame game. IT is not necessarily at fault, or at least only indirectly. Instead, it seems to be the entire structure and process through which organizations absorb and metabolize technology. It's centralized. It's controlled. It's top-down. There are exceptions, but in most organizations technology decisions are made at high levels and then pushed across the organization. This transmission process is slow and unpredictable. It's also often not supported on the ground, where reality reaches the business.

Unfortunately, the slow pace of IT adoption, hindered by traditional project management practices, endless customization processes, IT backlogs, security concerns, and a dozen other drags on delivery performance, is only part of the problem. The fact that the technology world is largely no longer driven by the enterprise world (as it was for decades) is another major reason technology and business are having a harder time aligning these days.

A few examples will suffice: the endless and seemingly real-time flow of useful and highly innovative new mobile and Web apps for managing travel, money, news, communication, productivity, and countless other key functions is only an inadequate trickle in the enterprise today.
The ability to quickly connect, communicate, and collaborate via social conversations, photos, audio, video, and more with anyone in the world is currently much more limited in most businesses. Finding and acquiring new software is just the click of a button in an app store in the consumer world, but an arduous, manual, and failure-prone process in most organizations. User experiences are changing: the aging and slow-to-evolve graphical user interface is being uprooted by touch-based interfaces in new consumer apps that work much better in many physical situations. In contrast, the same overhaul is happening an order of magnitude more slowly for business apps.

Where do technology and IT go from here?

If we project these trends forward, what will the outcome be? Is there going to be a final fork in the road for consumer and enterprise technology, with each side looking at the other through a diverging pair of windows, with minimal crossover between the two? Or will the two worlds continue to blur together, as technology cross-pollinates from the growing wall of innovation coming from the Web and the consumer technology world? Given the virality and pervasiveness of consumer technology, the latter is by far the most likely scenario.

So what are the key IT trends of the next half decade, and how will organizations adapt to them? In a conversation I had recently with the Editor-in-Chief of CIO Magazine, Maryfran Johnson, we discussed what I dubbed the "Big Five", the biggest technology influences of the next half decade: next-gen mobility, social media (or more specifically social business), cloud computing, consumerization, and big data.
We agreed that these five, of all current tech trends, are at the top of the list of what most organizations need to be planning for in their current strategies and roadmaps as they update and modernize, as well as (hopefully) out-innovate their competitors.

Below I will explore approaches that might break the logjam preventing much of the business world from staying as current with technology advances as it should. But first let's take a look at each of these trends with an eye toward the most up-to-date statement of the advantages they can provide. I'll also provide a key new insight on overcoming the challenges of adopting each of them more effectively and successfully.

1) Next-gen mobile: smart devices and tablets

It's obvious to the casual observer these days that smart mobile devices based on iOS, Android, and even BlackBerry OS/QNX are seeing widespread use. But comparing projected worldwide sales of tablets and PCs tells an even more dramatic story. Using the latest sales projections from Gartner on tablets and current PC shipment estimates from IDC, we can see that by 2015 the tablet market will be 479 million units while the PC market will be only just ahead at 535 million units. This means tablets alone are going to reach effective parity with PCs in just three years. Other data I've seen tells a similar story.

So, while it's still early days, it's also quite clear that enterprises must start treating tablets as equal citizens in their IT strategies. So why won't they? For several reasons:

Challenges to smart device adoption

Smart devices have a poor enterprise ecosystem today. Enterprise software vendors and IT departments have organized around older platforms such as Windows and LAMP. Their infrastructure, skills, and relationships are largely built around an older generation of IT.
In the meantime, iOS and Android have a lot to learn and build up before they begin to match this world, though they are starting to make progress in this regard.

Many of the inherent advantages of smart mobile are anathema to structured IT. From app stores to HTML5, the large and easy-to-access application universes of next-gen mobile immediately trigger a security lockdown response (right reaction, wrong response) from IT. I've even seen IT departments that want to remove app stores from smart mobile devices entirely. The solution is probably policy-based screening of apps, but that solution is still a ways away.

Key adoption insight

A likely approach that will scale is to do as JP Rangaswami advocates and "design for loss of control." This doesn't mean letting go of essential control such as robust security enforcement, but it does mean providing a framework for users to bring their own mobile devices to work in a safe manner, including use of apps with business data under certain prescribed conditions. This unleashes choice and innovation and, vitally, splits the work of adoption and rollout with users who want to use their favorite mobile devices and apps to solve a business problem.

2) Social media: social business and Enterprise 2.0

While mobile phones technically have a broader reach than any other communications device, social media has already surpassed that workhorse of the modern enterprise, e-mail. Increasingly, the world is using social networks and other social media-based services to stay in touch, communicate, and collaborate. Key aspects of the CRM process are now being overhauled to reflect a fundamentally social world and are expected to see stellar growth in the next year.
As Salesforce's Marc Benioff made very clear in his dramatic keynote at Dreamforce last month, leading organizations are becoming social enterprises. There now seems to be hard data to confirm this view: McKinsey and Company reports that the revenue growth of social businesses is 24% higher than that of less social firms, and data from Frost and Sullivan backs that up across various KPIs. The message is that companies will, and have every reason to, use social media as a primary channel in the very near future, if they aren't already. It's time to get strategic.

Challenges to social media adoption

Social media is not an IT competency. Simply put, the human interaction portion of social computing is generally not IT's strong suit. It tends to be treated as just another application to roll out instead of being integrated meaningfully into the flow of work.

The more significant value propositions of social require business transformation. Maintaining a Facebook page and a Twitter account is relatively straightforward and necessary, but it usually won't generate significant growth, revenue, or profits by itself. The more profound, higher-order aspects of social media, including peer production of product development, customer care, and marketing, require deeper rethinking of business processes.

Key adoption insight

There are a growing number of established social media adoption strategies, but probably one of the most effective is to engage by example. Both leadership inside the company and top representatives to the outside world must engage in social channels to show how they'd like change to happen.

Related: Reconciling the enterprise IT portfolio with social media

3) Cloud computing

Of all the technology trends on this list, cloud computing is one of the most interesting and, in my opinion, now the least controversial.
While there are far more reasons to adopt cloud technologies than cost reduction alone, according to Mike Vizard perceptions of performance issues and lack of visibility into the stack remain among the top issues for large enterprises. Yet, among the large-enterprise CTOs and CIOs I speak with, cloud computing is being adopted steadily for non-mission-critical applications, and some are now even beginning to downsize their data centers. Business agility, vendor choice, and access to next-generation architectures are all benefits of employing the latest cloud computing architectures, which are often radically advanced compared to their traditional enterprise brethren.

Challenges to cloud computing adoption

Concerns about control. When jobs depend on IT being up and working, you can be sure there will be reluctance to adopt the cloud. There's also little question that not going the cloud route will mean short-term job security, but at what ultimate cost? Never mind that many CIOs and heads of IT simply feel they can't yet trust the cloud, despite many cloud providers being more reliable than internal infrastructure (Google recently reported four nines across its Gmail and Google Apps services).

Reliability and performance perceptions. Widespread outages at Amazon and Microsoft in the past have set back cloud adoption a minor amount, yet uptime is still extraordinarily good by most enterprise standards. More of an issue is moving the enormous datasets that enterprises now possess into and out of the cloud quickly enough. Backhaul and other methods will need to improve substantially to address this satisfactorily for large enterprises.

Key adoption insight

Until cloud computing workloads can be seamlessly transferred back and forth between a company's private cloud and public/hybrid clouds, adoption will be held back and favored largely for greenfield development.
Technologies are now emerging to make this possible, however. For now, companies should invest in cloud standards (to the extent they exist today) to build private clouds, in order to be in a position to start selectively transferring services out on a trial basis (and to be able to bring them back in safely as needed).

Related: Fixing IT in the cloud computing era.

4) Consumerization of IT

I've previously made the point that the source of innovation in technology is coming largely from the consumer world, which also sets the pace. Yet that's just one aspect of consumerization, which some, like myself and Ray Wang, are calling "CoIT" for short. Consumerization also very much has to do with its usage model, which eschews enterprise complexity in favor of extreme usability and radically low barriers to participation. Enterprises that don't steadily consumerize their application portfolios are in for even lower levels of adoption and usage than they already have, as workers continue to route around them for easier and more productive solutions. Another decentralized and scalable solution is, as with next-gen mobile, to help workers help themselves to third-party apps that are deemed safe and secure.

Challenges to applying consumerization to IT

Vendors provide the UX. Usability and low barriers to participation won't exist until third-party vendors, which provide a large percentage of IT (often on lengthy upgrade intervals), get the message and overhaul their apps.

Consumer technology often isn't enterprise-ready. At one point, neither was open source, but eventually an industry providing value-added services emerged. The same pattern is likely to play out with popular consumer apps.

Key adoption insight

Consumerization seems especially pernicious to IT departments because it happens all the time, without their involvement. Estimates of "shadow IT" vary but run in the lower double digits, and much of it is consumer apps.
IT departments can begin programs in partnership with other large companies (to distribute the work) to certify SaaS, cloud, and mobile apps, and to train workers on data safety, backup, and integrity, for example. Longer term, companies will imbue their IT service design, solution acquisition, and delivery with user-experience and design approaches and fresh ideas from the consumer world. This will drive more worker productivity, less user support, and higher innovation in IT solutions.

5) Big data

Businesses are drowning in data more than ever before, yet have surprisingly little access to it. At the same time, business cycles are growing shorter and shorter, making it necessary to "see" the stream of new and existing business data and process it quickly enough to make critical decisions. The term "big data" was coined to describe new technologies and techniques that can handle an order of magnitude or two more data than enterprises do today, something existing RDBMS technology can't do in a scalable or cost-effective manner.

Big data offers the promise of better ROI on valuable enterprise datasets while making it possible to tackle entirely new business problems that were previously impossible to solve with existing techniques. While most companies are still addressing their big data needs with data warehousing, according to Loraine Lawson, one need only scan the impressive McKinsey report on big data to see the major opportunities it offers on the business side.

Related: The enterprise opportunity of Big Data: Closing the "clue gap"

Challenges to adopting big data

Big data requires many new skills. There is a host of advanced technologies and new platforms to learn to be effective with big data, and the IT departments I've spoken with are concerned about the skills they must acquire or foster internally to take advantage of them.

Meaningful use of big data requires considerable cross-functional buy-in.
Big data requires tapping into silos, warehouses, and external systems using new techniques. SOA had similar challenges because it had to coordinate and align so many parts of the business. While some big data work will be single-function, many of the more intriguing possibilities require a lot of cooperation across the business and with external vendors, not an easy task.

Key adoption insight

Big data requires a mindset change as much as a technology update. This means making open data a priority for the enterprise, along with an operational velocity that hasn't been a priority before. Big data enables solving new business problems in windows that weren't possible before. It also means infrastructure, ops, and development must be part of the same team and used to working together, so organizational refinements must be made to tap into the greater potential.

How IT can evolve to meet the Big Five

I'm beginning to see that in order to stay relevant, and not become the PBX department, IT departments must be prepared to take a "Big Leap" to meet the Big Five. What this Big Leap looks like will be different for every organization, and there are multiple directions that can be taken. As I wrote on Twitter recently, the deeply transformational nature of most of the Big Five means IT must either start leading the business models and evolution of the organization, or become a commoditized utility while the business figures out the moves on its own. This almost certainly means open supply chains and enabling strategic IT abundance via designed loss of control, coupled with emergent and agile approaches to IT. Now that I've explored the Big Five, I'll soon take a look at the Big Leap and see what the options are for IT, such as "The Next Generation Enterprise Platform" that Michael Fauscette recently posited, not only to remain relevant in the 21st century but to become the driver of business.
  • Transcript

    • 1. Web 2.0 and Social Networks (and new technologies for education) Vesna Petković, VISER, 2013-12-25
    • 2. Social media in 2013 • http://vator.tv/news/2013-12-21-the-top-10social-media-events-of-2013 • http://socialmediatoday.com/johnkultgen/1969566/top-10-social-mediapredictions-2014-based-2013-moments • http://socialmediatoday.com/robincarey/2020016/death-social-or-you-aintseen-nothing-yet • http://mashable.com/2013/12/22/top-10social-media-brands-2013/?utm_cid=mashcom-fb-main-photo
    • 3. Forrester's study on social media use • Most Americans and Europeans only consume content rather than create it • The situation in Asia is different: 75% of adult users in urban areas of China and India create content
    • 4. • http://vator.tv/news/2013-04-10-teen-interestin-established-social-media-is-declining
    • 5. Web 2.0 — Web 2.0 is a version of the Web whose defining characteristic is users' active participation in creating the content of a website. This participation takes the form of creating and editing content, rating, commenting, and adding tags. In this way all users gain access to new information that the content creators consider significant.
    • 6. A bit of history (... and a look at the future) • WWW • Tim Berners-Lee • Web 3.0 and the Semantic Web • http://www.youtube.com/watch?v=off08As3siM http://webthreeoh.wordpress.com/2011/05/18/the-semantic-web-layer-cake/
    • 7. WWW: the first Web browser and editor. WorldWideWeb (written without spaces) was the first Web browser, created by Tim Berners-Lee at the European Particle Physics Laboratory (CERN) in Switzerland. The browser was later renamed Nexus to distinguish it from the global hypertext system, the World Wide Web, whose use was spreading massively. Today the Web is used as a tool for collaboration and communication among people, enabling them to exchange digital content, ideas, and knowledge faster and more easily.
    • 8.
    • 9. The term Web 2.0 • The term Web 2.0 was first used by Tim O'Reilly, founder and CEO of O'Reilly Media Inc. • The term Web 2.0 refers to web applications that enable easy information sharing, interoperability, user-centered design, and collaboration on the World Wide Web. • http://www.youtube.com/watch?v=CQibri7gpLM
    • 10. Web 2.0 introduction • A Web 2.0 site lets users interact with one another and collaborate in a dialogue on a social medium, as creators ("prosumers") of user-generated content in a virtual community. • This differs from consumer websites, where users are limited to passively viewing content created for them. • Examples of Web 2.0 include: blogs, social networking sites, wikis, mashups, and folksonomies.
    • 11. Web 2.0 introduction http://oreilly.com/web2/archive/what-is-web-20.html
    • 12. Web 2.0 meme map http://oreilly.com/web2/archive/what-is-web-20.html
    • 13. The Web as a platform • Applications are built on the Web, not on the desktop. • The focus is not on software (as in Netscape's old "webtop" software paradigm), i.e. on the web browser, but on providing services based on data, such as the links to the Web pages that connect them • Netscape vs. Google
    • 14. User centered http://www.time.com/time/magazine/article/0,9171,1570810,00.html
    • 15. User centered • You control your own data • Grounding requirements and design decision making in observed behavioral data • Using user advocacy as a means to product development… • For websites, portals, intranets; software, Web applications; embedded or mobile devices, hardware products; for end-to-end experiences.
    • 16. Software that gets better the more people use it • Users improve the software: the key to an Internet application's success lies in the amount of their own data that users add to it • Therefore: do not constrain the "architecture of participation" when developing software. • Users should be involved as much as possible, both implicitly and explicitly, to add value to the application.
    • 17. Architecture of participation • A phrase also coined by Tim O'Reilly, used to describe the nature of systems designed so that users can contribute by adding their own content. • It is a Web 2.0 concept whereby a community of users contributes to the content, or to the design and development process of the software. http://www.oreillynet.com/pub/a/oreilly/tim/articles/architecture_of_participation.html • In his 1998 essay, Linus Torvalds likewise stressed the importance of the software architecture that allowed Linux to evolve through collaboration among developers • HTML and the browser's View Source option
    • 18. Long tail • The Long Tail refers to the statistical property that a larger share of the population falls within the "tail" of a probability distribution than under a normal (Gaussian) distribution • Distribution and storage costs matter for this strategy: selling small quantities of many kinds of rare products to a large number of customers can yield more profit than selling large quantities of a few popular products • The total volume of "unpopular" products sold is called the Long Tail.
    • 19. Long Tail Theory and Recommender Systems • Long Tail theory is associated with e-commerce sites such as Amazon and eBay, which use recommender-system techniques to sell rare and unpopular products to the customers who want them. • The point is the increase in profit when such products are sold, because their prices are not tied to other sellers', unlike popular products, which carry the same price everywhere • A recommender system relies on computations that match customers to products, which then appear on personalized home pages, in e-mails sent to customers, or through cross-selling offers of similar products
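The matching computation the slide describes can be sketched with a tiny item-based recommender. This is an illustrative sketch only: the purchase baskets and product names are invented, not real Amazon or eBay data, and real systems use far more sophisticated models.

```python
# Minimal item-based recommender sketch: recommend items that are most
# often bought together with a given item. All data below is invented.
from collections import Counter
from itertools import combinations

# Each inner list is one customer's purchase history.
baskets = [
    ["popular-novel", "rare-vinyl", "niche-cookbook"],
    ["popular-novel", "rare-vinyl"],
    ["popular-novel", "niche-cookbook"],
    ["rare-vinyl", "niche-cookbook"],
]

# Count how often each pair of items appears in the same basket.
co_bought = Counter()
for basket in baskets:
    for a, b in combinations(sorted(set(basket)), 2):
        co_bought[(a, b)] += 1

def recommend(item, k=2):
    """Return up to k items most often co-purchased with `item`."""
    scores = Counter()
    for (a, b), n in co_bought.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [other for other, _ in scores.most_common(k)]

print(recommend("rare-vinyl"))
```

The co-purchase counts stand in for the "computations that find the customers"; a personalized page would simply call `recommend` for each item the customer already bought.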
    • 20. Services, not packaged software • The example O'Reilly gave on his blog is Google • It is a cross-platform service, usable on any OS • No software has to be purchased in order to use it • It includes a specialized database of information (the search results) that works together with the search-engine software • Without the database the application is useless, and without the software the database holds too much data to be searched
    • 21. Remixable data sources and data transformations • http://tenbyten.org/10x10.html • Has it ever happened that you saw a website and thought: this is exactly what I was looking for, if only it also had this and that... • Dapper • Mashery • Yahoo Pipes • Programmable Web • Mashup
    • 22. Hackability • Using tools the way we want • To access data • To combine content • To fix and improve tools • To invent new tools • To respond to user requests • Hackability is the ability of a tool or device to be modified in ways not originally intended, so that users can use it in new ways
    • 23. Mashup • A mashup is a Web application built by combining data and/or services from multiple sources. • A mashup can be defined as: • A web page that uses Web technologies, primarily JavaScript, PHP, and XML, and that presents information from different sources, or presents it in a different way. • This way, the user does not have to visit many locations on the Internet to obtain a particular piece of information
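The combining step a mashup performs can be sketched as a simple join of two independent data sources into one view. Both "sources" below are hard-coded stand-ins for what would normally be two web APIs (say, a mapping service and a weather service); the city data is invented for illustration.

```python
# A tiny "mashup" sketch: join two independent data sources on a shared
# key (the city name) to build one combined view, as a mashup page would.
places = {
    "Belgrade": {"lat": 44.8, "lon": 20.5},
    "Novi Sad": {"lat": 45.3, "lon": 19.8},
}
weather = {
    "Belgrade": {"temp_c": 21, "sky": "clear"},
    "Novi Sad": {"temp_c": 19, "sky": "cloudy"},
}

def mashup(city):
    """Merge location and weather data for one city into a single record."""
    combined = {"city": city}
    combined.update(places.get(city, {}))
    combined.update(weather.get(city, {}))
    return combined

print(mashup("Belgrade"))
```

In a real mashup the two dictionaries would be replaced by HTTP calls to the respective services, but the joining logic stays the same.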
    • 24. Music mashups • https://www.youtube.com/watch?v=kspPE9E1yGM • https://www.youtube.com/watch?v=HJMapA8WgYw
    • 25. Harnessing collective intelligence • The difference between user-generated content and collective intelligence • Wikipedia vs. Encyclopaedia Britannica • Websites evolve and get better as users contribute their content. • Wikipedia is an example where a community of well-informed users monitors and maintains the site. • Anyone can contribute, and anyone can post inaccurate information, accidentally or deliberately. So there is no way to guarantee the accuracy of the information, and everyone is responsible for the information they post http://radar.oreilly.com/2006/11/harnessing-collective-intellig.html
    • 26. Rich user experience • Web pages have always had some dynamism, originally enabled by applets • In 2005 Macromedia introduced the term Rich Internet Applications to highlight the new capabilities of its tool, Flash • In his essay, Jesse James Garrett presents AJAX (Asynchronous JavaScript and XML) as a combination of several technologies
    • 27. AJAX • Standards-based presentation using XHTML and CSS; • Dynamic display and interaction using the Document Object Model; • Data interchange and manipulation using XML and XSLT; • Asynchronous data retrieval using XMLHttpRequest; • and JavaScript binding everything together.
    • 28. AJAX • AJAX (Asynchronous JavaScript and XML) is one of the most important technologies that allowed Web applications to gain the functionality and look of desktop applications. • The key term in the AJAX acronym is asynchronous. It refers to the fact that content can be loaded into the browser asynchronously, which means there is no longer any need to reload entire pages, as was very common on the Web. • In this way AJAX brings a new paradigm of user interaction with the browser.
    • 29. Perpetual beta • Perpetual beta means keeping software or a system in the beta development phase for an extended or indefinite period. • Developers often use it when they keep releasing new features that are not fully tested • As a result, such software is not recommended for mission-critical work • However, many working systems use this more agile and faster way of developing and releasing software.
    • 30. Perpetual beta • In this development model, constant updates are the basis of the service's appeal and usefulness. • Users are treated as partners, in line with the open source way of developing software (even if the software itself is not developed that way) • The open source doctrine of "release early, release often" has turned into perpetual beta, an even more radical form • It is no accident that Gmail, Google Maps, Flickr, Delicious, and similar services kept the Beta logo for years.
    • 31. Data as the "Intel Inside" • Every significant Internet application to date has been backed by a specialized database: Google's web crawl, the Yahoo! directory (and web crawl), Amazon's product database, eBay's database of products and sellers. • privacy?
    • 32. Tagging and folksonomy • A tag is a non-hierarchical keyword or term assigned to a piece of information (such as an Internet bookmark, an image, or a file) • This kind of metadata helps describe an item and makes it easier to find • Tagging was popularized by the rise of websites associated with the Web 2.0 philosophy and is one of the main features of many Web 2.0 services.
    • 33.
    • 34. Tagging and folksonomy • Allows users to collaboratively classify and search information • Tag clouds are a way of visualizing a folksonomy • 43 Things
    • 35. Tagging and folksonomy • A folksonomy (collaborative tagging, social classification) is a classification system that emerges from the practice and method of collaboratively creating and managing tags to annotate and categorize content. • The term was coined by Thomas Vander Wal as a combination of the words folk and taxonomy.
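The mechanics of a folksonomy and of the tag clouds mentioned above can be sketched in a few lines: several users tag resources, the tags are aggregated, and each tag's count relative to the most-used tag gives its weight in the cloud. The users, URLs, and tags below are invented sample data.

```python
# Folksonomy sketch: aggregate tags from many users into one collective
# classification, then derive tag-cloud weights from the counts.
from collections import Counter

# (user, resource, tags) triples; all invented for illustration.
taggings = [
    ("ana",   "http://example.com/ajax-intro", ["ajax", "javascript", "web2.0"]),
    ("boris", "http://example.com/ajax-intro", ["ajax", "tutorial"]),
    ("vesna", "http://example.com/folksonomy", ["tagging", "web2.0"]),
]

# The collective classification: how often each tag was used overall.
counts = Counter(tag for _, _, tags in taggings for tag in tags)

# Tag-cloud weight: each tag's count relative to the most-used tag,
# which a page would map to font size.
top = max(counts.values())
cloud = {tag: round(n / top, 2) for tag, n in counts.items()}

print(cloud)
```

No single user decided that "ajax" is the dominant tag; the weight emerges from the aggregated tagging behavior, which is exactly what distinguishes a folksonomy from a top-down taxonomy.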
    • 36. Social bookmarking • Social bookmarking services allow users to create their own tagged bookmarks on shared Web locations, where similarly tagged bookmarks are aggregated. • As the user base grows, the collection gets better and richer, and becomes more than a directory of links. • It aggregates information from many users, and enables information sharing and the building of communities of people with overlapping interests
    • 37. Social bookmarking • http://delicious.com/vesnap/oitsocial
    • 38. PageRank • PageRank is a link-analysis algorithm, named after Larry Page • It is used by Google's search engine, which assigns a numeric weight to each element of a linked set of documents, such as the World Wide Web, with the goal of "measuring" its relative importance within the set https://plus.google.com/106189723444098348646/posts • The numeric weight assigned to each element E is PR(E), read as "the PageRank of E" • http://www.prchecker.info/check_page_rank.php
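The weight PR(E) can be illustrated with a small power-iteration sketch over a hand-made link graph, following the standard published PageRank formula with a damping factor of 0.85. This is an illustration of the algorithm, not Google's actual implementation, and the three-page graph is invented (it also assumes every page has at least one outgoing link, so dangling pages are not handled).

```python
# Minimal PageRank power iteration over a tiny link graph.
def pagerank(links, damping=0.85, iterations=100):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}          # start with a uniform rank
    for _ in range(iterations):
        new = {}
        for p in pages:
            # Rank flowing into p from every page q that links to it,
            # split evenly among q's outgoing links.
            incoming = sum(pr[q] / len(links[q]) for q in pages if p in links[q])
            new[p] = (1 - damping) / n + damping * incoming
        pr = new
    return pr

# A links to B and C, B links to C, C links back to A.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
print({p: round(r, 3) for p, r in sorted(ranks.items())})
```

C ends up with the highest rank because both A and B link to it, which matches the intuition that PR(E) "measures" relative importance from the link structure alone.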
    • 39. eBay • eBay is an online marketplace, a forum for exchanging goods. • eBay's feedback system asks every user to give an opinion (positive or negative) about the person they completed a transaction with • Since only positive feedback improves a user's reputation, and other users then want to "do business" with them, users are encouraged to behave acceptably and treat each other fairly, sellers and buyers alike
    • 40. Amazon • Amazon lets users post a review and an opinion on each product's web page • They also rate products on a scale of 1 to 5 stars • Users earn badges for their reviews • The review with the most votes is displayed on the product page. • However, Amazon uses the same review across different editions or versions of a product, which is a downside
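The review mechanics described above (averaging star ratings and promoting the most-voted review) reduce to a few lines. The reviews below are invented sample data, not taken from Amazon.

```python
# Sketch of product-page review mechanics: average star rating plus
# selection of the most-voted review. All review data is invented.
reviews = [
    {"text": "Great read.",      "stars": 5, "votes": 12},
    {"text": "Average at best.", "stars": 3, "votes": 4},
    {"text": "Arrived damaged.", "stars": 1, "votes": 7},
]

# Overall rating shown next to the product title.
average_stars = sum(r["stars"] for r in reviews) / len(reviews)

# The community-endorsed review shown first on the page.
top_review = max(reviews, key=lambda r: r["votes"])

print(round(average_stars, 1))
print(top_review["text"])
```

Note how the two signals are independent: the top-voted review is chosen by helpfulness votes, not by its star rating, which is why a critical review can still lead the page.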
    • 41. Blogs • http://www.readwriteweb.com/enterprise/2012/01/new-models-for-web-publishing.php • Put most simply, a blog is a personal page in diary format • What sets it apart, however, is the chronological organization of the content • Another thing that contributed greatly was the arrival of RSS technology • The permalink enabled discussion of what was written on a blog; it is a way of linking blogs together
    • 42. Blogs • Because of the interlinking of blogs, the blogosphere represents a kind of consciousness of the Internet. • Blogging uses collective intelligence as a kind of filter. • It is what James Surowiecki calls "the wisdom of crowds", and just as PageRank yields better results for an individual document, the collective attention of the blogosphere selects what has value
    • 43. Google AdSense • Google AdSense is a free program that empowers online publishers to earn money by displaying relevant ads on the web pages they serve • Search results: easily add a search engine to a site and earn from the ads on the search result pages • Websites: display ads matched to the audience's interests and earn from valid clicks or impressions • Connecting mobile users with the right ad at the right time while they browse pages on the go.
    • 44. Wikipedia: radical trust • Wikipedia, the online encyclopedia, is based on the seemingly improbable fact that anyone can write anything about anything, and that someone else can edit that content • It represents a significant change in the dynamics of content creation • http://www.alexa.com/topsites
    • 45. BitTorrent • BitTorrent relates to the Web 2.0 characteristic described as radical decentralization • Every client is also a server; files are split into fragments stored in different locations, transparently using the network of downloading users, who provide both bandwidth and data to other users • The more popular a file, the faster it downloads
• 46. BitTorrent
  • The P2P movement plays a significant role in decentralizing the Internet
  • BitTorrent demonstrates a key Web 2.0 principle: the service automatically gets better the more people use it
  • While Akamai has to add servers to improve its service, BitTorrent users bring their own resources to the "party"
  • There is an implicit architecture of participation, a built-in ethic of cooperation, in which each service acts as an intelligent broker, connecting the edges and harnessing the power of its own users
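The piece mechanism behind this can be sketched: the file is cut into fixed-size pieces and each piece gets a SHA-1 hash (as in .torrent metainfo), so a downloader can independently verify fragments received from different peers. This is a simplified illustration, not the real wire protocol; the piece size and sample data are made up.

```python
import hashlib

PIECE_SIZE = 16  # real torrents use e.g. 256 KiB pieces; tiny here for illustration

def piece_hashes(data, piece_size=PIECE_SIZE):
    """Split `data` into pieces and return the SHA-1 digest of each piece."""
    return [hashlib.sha1(data[i:i + piece_size]).hexdigest()
            for i in range(0, len(data), piece_size)]

def verify_piece(index, piece, hashes):
    """A downloader checks each received piece against the published hash list."""
    return hashlib.sha1(piece).hexdigest() == hashes[index]

data = b"the quick brown fox jumps over the lazy dog"
hashes = piece_hashes(data)
ok = verify_piece(0, data[:PIECE_SIZE], hashes)     # a good piece from one peer
bad = verify_piece(0, b"corrupted piece!", hashes)  # a corrupt piece is rejected
```

Because each piece is checked on its own, fragments can safely arrive from many untrusted peers at once.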
• 47. Social media and marketing
  • Social media is an umbrella term for the various activities that integrate technology, social interaction and the creation of text, image, video and audio content.
  • Monologue vs. dialogue
  • Recommendations
  • "IN 2008, IF YOU'RE NOT ON A SOCIAL NETWORKING SITE, YOU'RE NOT ON THE INTERNET."
• 48.
  • Tomorrow's consumers are today's "digital natives."
• 49. Case study: Dell
  • Showing that it works!
  • A team of managers communicates with the community through blogs and forums
  • They monitor every mention of Dell online (RSS, searches)
  • Dell never censors criticism on blogs and responds quickly to criticism on its own blog and elsewhere
  • Dell employees may comment about their company on blogs, but must use their real names and identify themselves as Dell employees
  • Since 2006, when Dell launched the Direct2Dell blog and its main online communities, negative mentions of Dell have dropped from 50% to 20%
  • Dell's community initiative has the strong backing of CEO Michael Dell
  • Dell is routinely cited on blogs, in white papers and at conferences as an example of a company that understands and harnesses the value an online community provides
• 50. Barack Obama vs. Toyota
  • Barack Obama has been called the first Internet president
  • During the presidential campaign his speeches were posted on YouTube in full (unlike the short clips previously shown on TV news), and social networks enabled a new way of communicating with potential voters (Barack Obama has over 10 million friends on Facebook and MySpace, and over 3.5 million followers on Twitter)
  • http://www.nydailynews.com/news/politics/president-obama-poses-funeral-selfie-article1.1543188
  • https://twitter.com/BarackObama/status/266031293945503744
• 51.
  • Toyota missed the opportunity to use its Facebook profile page to pre-empt the negative image forming in the media
• 52. H&M campaign, 2013
  • http://marketingmreza.rs/hm-inovativan-pristup-twitter-ooh-pr/
• 53. More #fails in 2013
  • http://standupstrategy.org/2013/09/20/lufthansafail-why-you-cannot-separate-customer-service-and-social-media/
  • http://business.time.com/2013/11/14/jpmorgan-cancels-twitter-qa-after-an-epic-fail/
  • http://www.prdaily.com/Main/Articles/Kmart_issues_repetitive_answers_to_Thanksgiving_cr_15560.aspx#
• 54.
  • After the problems with parts in some of its car models, Toyota used its TweetMeme page to restore its reputation and its customers' trust
• 55. Technologies
  • At the core of the new Web 2.0 technologies is AJAX (Asynchronous JavaScript and XML)
  • Services such as Application Programming Interfaces (APIs), Representational State Transfer (REST) and Really Simple Syndication (RSS) feeds have enabled new ways of viewing and modifying the data on a website.
• 56. Gmail, Google Maps
  • Rich user experience
  • Gmail's interface differs from other webmail systems in its focus on search and on the "conversation view" of email, in which related replies are grouped on the same page
  • Kevin Fox, a user experience designer at Google, wanted users to feel they were always on the same page, rather than having to move through different pages to reach the information they wanted
  • Google Maps (formerly Google Local) is Google's web mapping service, offering features such as the website itself, a ride finder, transit directions, and maps embedded in other sites through Google's software
• 57. REST
  • The REST (Representational State Transfer) architecture consists of clients and servers.
  • Clients initiate a request to a server; the server processes the request and returns an appropriate response
  • Requests and responses are built around the transfer of resource representations
  • A resource can be anything that has an address
  • A representation of a resource is a document that captures the current or intended state of that resource
• 58. REST
  • The name itself is meant to evoke how a well-designed web application behaves: a network of web pages (a virtual state machine) through which the user moves by selecting links (state transitions), each resulting in a new page being displayed (the new state)
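A minimal in-memory sketch of this request/response cycle, with hypothetical resources and paths and no real HTTP stack, showing how an address maps to a representation of a resource's current state and how sending a new representation changes that state:

```python
# Each resource is identified by an address; its representation is a
# document describing its current state.
resources = {
    "/users/1": {"name": "Ana", "role": "student"},
    "/users/2": {"name": "Marko", "role": "editor"},
}

def handle(method, path, body=None):
    """Dispatch one request and return (status_code, representation)."""
    if method == "GET":
        if path in resources:
            return 200, resources[path]
        return 404, None
    if method == "PUT":
        # Transferring a new representation moves the resource to a new state.
        resources[path] = body
        return 200, body
    return 405, None  # method not allowed

status, doc = handle("GET", "/users/1")
handle("PUT", "/users/1", {"name": "Ana", "role": "teacher"})
```

Each exchange is self-contained: the client learns the state of `/users/1` only from the representation it receives, which is the stateless interaction REST describes.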
• 59. JSON (JavaScript Object Notation)
  • A simple format for data interchange
  • Easy for humans to read, and easy for machines to parse and generate
  • A text format that is independent of any programming language, but uses conventions familiar from most of them
  • JSON is built on two structures:
    – a collection of name/value pairs
    – an ordered list of values
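The two structures map directly onto objects/dictionaries and arrays/lists. A short round trip through Python's standard json module (the profile data is invented for illustration):

```python
import json

# A collection of name/value pairs, containing an ordered list of values.
profile = {
    "name": "Ana",
    "tags": ["web2.0", "rest", "json"],
    "active": True,
}

text = json.dumps(profile)   # serialize to the language-independent text format
restored = json.loads(text)  # parse it back into native structures
```

Any language with a JSON library can read `text` and recover the same two structures, which is what makes JSON the default interchange format for web APIs.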
• 60. API
  • An application programming interface (API) is a source-code specification intended to serve as the interface through which one software component communicates with other software components
  • An API can include routines, data structures, object-oriented classes and variables
  • An API specification can take many forms
  • Examples...
• 61. Crowdsourcing – made on the Web, designed by us
• 62.
  • Crowdsourcing is a dependable way to outsource certain tasks in order to produce the desired results
  • It has become very attractive to companies both small and large
  • It involves engaging a large number of people who want to create particular content for someone
  • http://mashable.com/2013/08/22/crowdsourced-tourism/
• 63. Mechanical Turk
• 64. WIKI Culture and Collaboration
  • A wiki is a website whose users can modify, add and delete content through a web browser, using a simplified markup language or a rich-text editor
  • Wikis are usually powered by wiki software and are often created collaboratively by multiple users.
  • Wikis have a great many uses
  • Ward Cunningham, who developed the first wiki software, WikiWikiWeb, originally described it as the simplest online database
  • "Wiki" is Hawaiian for "quick"
  • A single page is called a wiki page, and the whole collection is usually interconnected by hyperlinks
• 65. Collective intelligence
  • The term refers to the participation of a large number of people, usually of different backgrounds, habits and ways of thinking, in the process of creating digital content.
  • This approach generally yields better answers to questions and better search results than weighing individual opinions separately
  • One example of collective intelligence is the creation of folksonomies, the collaborative classification of content.
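Folksonomy-style aggregation is easy to sketch: many users tag the same item, and the most frequent tags become its collective classification. The tag data below is invented for illustration:

```python
from collections import Counter

# Hypothetical tags applied to one bookmarked page by different users.
user_tags = [
    ["web2.0", "ajax"],
    ["web2.0", "javascript"],
    ["ajax", "web2.0"],
    ["tutorial"],
]

# Count every tag across all users; the crowd's consensus emerges on top.
tag_counts = Counter(tag for tags in user_tags for tag in tags)
top_tags = [tag for tag, _ in tag_counts.most_common(2)]
```

No single user decides the classification; the ranking emerges from everyone's independent choices, which is the "filter" role of collective intelligence described above.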
• 66. Social networks
  • Social networks involve the formation of virtual communities in which users can exchange opinions, discuss, collaborate and debate topics tied to their shared interests
  • Social networks can be defined as web services that allow individuals to:
    – create a public or semi-public profile within a bounded system,
    – create a list of other users with whom they share a connection,
    – view and traverse their own list of connections and those made by others within the system.
• 67. Core characteristics of social networks
  • The core characteristics of social networks are:
    – the formation of an online community of users who share common interests,
    – collective intelligence,
    – user-generated content.
  • Openness is another characteristic intrinsic to Web 2.0, covering both content distribution and uniform data formats. It is supported by Open Data APIs, and one way of distributing content is via the RSS/Atom formats.
  • An Open Data API is an interface that a computer program provides in order to allow interaction with other programs.
• 68. Forming online communities
  • The first recognized social network was sixdegrees.com, which launched in 1997 and officially closed in 2000.
  • Social networks are an open place where people meet, collaborate and exchange information.
  • Characteristic of social networks are so-called weak ties: acquaintances, people who are neither close friends nor family members.
  • Weak ties matter because they give people new perspectives and opportunities that close contacts cannot provide.
• 69. User-generated content
  • On social networking sites, users are responsible for adding and updating the information that is available to everyone.
  • These virtual communities allow their members to exchange opinions, discuss, collaborate and debate topics
  • User-generated content is website content created and kept up to date by the site's users
  • User-generated content has changed how the Internet is used: instead of a one-way broadcast of media content by a small number of companies, content is created by users and exchanged among them, rather than by conventional media
• 70. Classifying social networks
  • Social networks can be categorized by the type of social software and its application, as follows:
  • Multiplayer gaming environments:
    – Multi-User Dungeons (MUDs),
    – Massively Multiplayer Online Games (MMOGs).
  • Conversation systems:
    – Synchronous:
      • instant messaging (IM), and
      • chat (e.g. Yahoo Messenger, Google Talk, Skype).
    – Asynchronous:
      • e-mail,
      • bulletin boards,
      • discussion boards,
      • commenting systems (e.g. Slashdot).
• 71. Classifying social networks
  • Content management systems:
    – blogs,
    – wikis,
    – document management (e.g. Plone), and
    – web commenting services.
  • Product development systems, above all for open-source software:
    – Sourceforge.net,
    – Libresource.
  • Peer-to-peer content sharing:
    – Gnutella,
    – BitTorrent,
    – eMule,
    – iMesh.
• 72. Classifying social networks
  • Buying/selling management systems: eBay
  • Learning management systems (LMS):
    – Blackboard,
    – WebCT,
    – Moodle.
  • Relationship management systems:
    – Friendster,
    – Orkut.
  • Syndication systems:
    – list-servs,
    – RSS aggregators (readers).
  • Distributed classification systems:
    – Flickr,
    – delicious.com
• 73. Social semantic web
  • Semantic Web technologies make it possible to attach structured data to user-generated content.
  • Web 2.0 features that can be used to add semantic meaning to content include:
    – recommendations, functionality that already exists on Web 2.0 sites such as Amazon ("customers who bought this item also bought...") or http://delicious.com (classification of online content based on users' tags),
    – and personalization, as used in customized feeds
• 74. Social semantic web
  • The two main shortcomings of social networks are: content locked inside the social network's own site, and the inability to exchange user data between different social networks.
  • Semantic Web technologies offer a solution that would significantly affect the content users create through their online activity on social networks.
  • A semantic wiki uses semantic descriptions of the links and content on a page, enabling smarter search and navigation.
• 75. Problems
  • Fragmentation of conversations
  • Lack of control over one's identity, contacts and personal data
  • The need for a better social web on mobile devices.
  • Poor integration between social networks and location services.
  • Difficulty taking part in social activities across several channels at once.
  • The OAuth standard (now adopted by more and more online social networks), or the use of open APIs.
• 76. Social network aggregators
  • Social network aggregation is the process of collecting content produced across multiple social networks.
  • This task is usually carried out by a social network aggregator.
  • Social network aggregators make it easier to manage the content a user creates across the multiple social networks they belong to.
  • However, they do not solve the key problem of social networks: the inability to freely exchange content between different networks.
  • The problem is the lack of open and/or interlinked data, since every social network uses its own data format.
• 77.
• 78. Location-aware social networks
  • Geocoding is the process of deriving geographic coordinates (latitude and longitude) from other geographic data, such as an address or a postal code
  • With geographic coordinates, things can be mapped and entered into a geographic information system, or embedded in digital content such as photographs
  • Geotagging is the process of adding geographic identification to digital content
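Once content carries coordinates, a location-aware service can compute distances between points. A standard haversine sketch, assuming a spherical Earth of radius 6371 km (an approximation; the coordinates below are chosen only to illustrate):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

# One degree of longitude along the equator is roughly 111 km.
d = haversine_km(0.0, 0.0, 0.0, 1.0)
```

This kind of distance check is what lets a geosocial network decide which users or venues count as "nearby".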
• 79.
  • Foursquare is a location-based social network, but also a game built around going out, where users earn extra points for being the first to check in at a new venue, for going out every night, or for adding new information about restaurants and clubs
  • The iPhone app also integrates Yelp reviews and Google Maps
• 80. Location-aware social networks
  • Geosocial networking is a kind of social networking in which geographic services and the capabilities of geocoding and geotagging add an extra dynamic
  • Users contribute location data, or geolocation techniques allow the network to connect and coordinate users with people nearby or with events that match their interests
  • Geolocation can be IP-based or use hotspot trilateration
  • For mobile social networks, location information sent by SMS or obtained by tracking the phone can let a location-based service enrich the network
• 81. Social Games / Game-based learning
  • "Social games" are web-based games such as Mafia Wars and FarmVille, as opposed to Xbox 360 action games such as Gears of War or Halo
  • They are multiplayer and asynchronous
  • They rely on virtual goods, the business model that has shaped the market, in which virtual currency is bought and then spent on in-game items.
  • Research shows that 55% of players are women
  • and that they more often play with people they know
• 82.
  • We have already mentioned Generation Y: the millennials, the digital natives
  • They already spend 350 billion dollars a year online
  • Guess where they spend the most
• 83. Game-based learning
  • Game-based learning has attracted a great deal of attention since 2003, when James Gee began explaining the impact of playing games on cognitive development
  • Since then there has been ongoing research into, and interest in, the impact of games on learning
  • The number and variety of games has grown, and new gaming platforms and games for mobile devices have appeared
• 84. Virtual Worlds
• 85. Virtual Worlds
  • A virtual world is an online community in the form of a computer-based simulated environment through which users interact with one another and use and create objects
  • They are 3D virtual environments in which users are visible to one another through avatars
  • They must be persistent: even when a user leaves the world, all the changes they made must be preserved
  • They are synchronous, persistent networks of people, made possible by networked computers.
  • MMOs and World of Warcraft
• 86. Social coding
  • Software development by a distributed virtual community
  • Code.google.com
  • https://github.com/ – the largest code host in the world, with over 2 million repositories and 1 million users.
  • Developers connect with one another, which makes exchanging ideas easier
  • Mercurial is a free tool for distributed version control of source code.
  • SVN (Subversion)
• 87. Social coding
• 88. Horizon Report 2011: trends in technologies for education
• 89. Electronic books
  • Alongside new reading devices (the iPad and Samsung Galaxy, as well as the Kindle), a range of software is appearing that is revolutionizing the reading of books
  • The electronic books Nelson, Coupland, and Alice – what the books of the future will look like
  • IDEO
  • New reading experiences built by linking discussions together; the added value that comes from connecting readers
  • http://vimeo.com/15142335
  • http://www.nytimes.com/2010/06/20/business/20unbox.html
• 90. Education tools
  • https://voicethread.com/about/features/
  • http://www.glogster.com/explore/Technology
  • http://animoto.com/education
  • http://prezi.com/ftv9hvziwqi2/coca-cola-company/
• 91. Mobile
  • According to a recent Ericsson report, by 2015 80% of people will access the Internet via mobile devices
  • What matters for education is that this year the number of mobile devices will surpass the number of computers
  • For most people in the developed world, a mobile phone with fast Internet access is always within reach
  • Mobile phones make it easy to browse web pages
  • http://www.mbaonline.com/planet-text/
• 92. Augmented/blended reality
  • Augmented reality, a capability that has existed for decades, is becoming widely available
  • Layering information over a 3D environment offers a new experience of the world; sometimes called blended reality, it is pushing computing further onto mobile devices, bringing new expectations about access to information and new opportunities for learning
  • http://techpp.com/2011/10/24/30-best-augmented-reality-apps-for-android/
• 93. Augmented/blended reality
  • AR refers to using a computer system to add a new layer of information to the real world, creating an enhanced reality
  • It can be used for visual, highly interactive forms of learning, adding a new layer of data to the real world
  • Its other key characteristic is its ability to respond to user input
  • http://www.google.com/landing/now/#cards
• 94. Gesture-based computing
  • Thanks to the Nintendo Wii, the Apple iPhone and the iPad, many people have now experienced gesture-based computing
  • John Underkoffler, who designed the visual interface in the film Minority Report, spoke in his 2010 TED talk about a real version of that interface, which he called g-speak
  • http://www.youtube.com/watch?v=dyMVZqJk8s4
  • The device is already used for training simulations that behave like their real-world counterparts.
• 95. Learning analytics
  • Learning analytics refers to interpreting the large amounts of data produced and gathered about students in order to assess their academic progress, predict future performance and flag potential problems
  • SNAPP – Social Networks Adapting Pedagogical Practice
  • http://research.uow.edu.au/learningnetworks/seeing/snapp/index.html
• 96. Internet of Things
  • The term has become shorthand for "smart" objects that can be networked to connect the physical world with the world of information
  • Smart objects have four key characteristics: they are small, so they can be attached to almost anything; they have a unique identifier; they have a small amount of storage in which information can be kept; and they can communicate with external devices on demand
  • The Internet of Things uses TCP/IP, so objects can be addressed and found on the Internet
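The four characteristics can be sketched as a tiny in-memory model of a smart object. The class, field names and sample reading are hypothetical; a real device would expose the same behavior over TCP/IP rather than as a local method call.

```python
import uuid

class SmartObject:
    """A networked 'thing': unique ID, small storage, on-demand communication."""

    def __init__(self):
        self.uid = str(uuid.uuid4())  # unique identifier
        self.memory = {}              # small amount of local storage

    def store(self, key, value):
        """Keep a piece of information in the object's memory."""
        self.memory[key] = value

    def query(self, key):
        """Communicate on demand: answer an external device's request."""
        return {"id": self.uid, key: self.memory.get(key)}

sensor = SmartObject()
sensor.store("temperature", 21.5)
reply = sensor.query("temperature")
```

Giving every such object an address is exactly what the TCP/IP point above enables: the `query` call becomes a network request to the thing itself.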
• 97.
  • The "Big Five" IT trends of the next half decade:
    – mobile,
    – social,
    – cloud,
    – consumerization,
    – and big data
  • http://www.zdnet.com/blog/hinchcliffe/the-big-five-it-trends-of-the-next-half-decade-mobile-social-cloud-consumerization-and-big-data/1811
• 98. Instagram
  • http://socialmediatoday.com/rosssimmonds/2019291/why-instagram-will-be-king-social-media-2014
