The presentation outlines plans for a new campus-wide application called CS:Live. It will integrate news, social events, coursework, and discussions in one place for students and staff. Key features will include Twitter and timetable integration. It will be developed using PHP, MySQL, Apache, and jQuery with AJAX and JSON techniques. Considerations around scope, features, technical implementation, appearance, development process, privacy, security, reliability, and accessibility are discussed. The team's goal is to create a structured yet flexible application that meets these considerations and benefits the entire school community.
ITSA KM Initiative Presentation to Forums and Task Force (ITS America)
The document discusses ITS America's Knowledge Management Initiative to capture and share knowledge related to IntelliDrive within the transportation community. The initiative will go through several phases including a knowledge audit survey, refining a taxonomy of IntelliDrive topics, collecting and organizing knowledge, and designing a web application for knowledge sharing. It aims to reduce barriers to sharing both explicit and tacit knowledge not found elsewhere and support members by providing an institutional memory.
The document contains lyrics from three Christmas songs: "Hit the Road Jack" which is about a man being told to leave by his lover, "Rudolph the Red-Nosed Reindeer" telling the story of Rudolph joining Santa's sleigh, and "Santa Claus is Coming to Town" describing Santa watching children to see who is naughty or nice.
Supporting slides for the Pemrograman Web 1 (Web Programming 1) course in the Informatics Engineering Department of Universitas Pasundan Bandung.
Also used as supporting material for videos on the "WebProgrammingUNPAS" YouTube channel:
https://www.youtube.com/channel/UCkXmLjEr95LVtGuIm3l2dPg
The document contains lyrics from three Christmas songs: "Hit the Road Jack" by Ray Charles, "Rudolph the Red-Nosed Reindeer", and "Santa Claus is Coming to Town". "Hit the Road Jack" has call-and-response lyrics telling someone to leave. "Rudolph the Red-Nosed Reindeer" tells the story of Rudolph joining Santa's sleigh. "Santa Claus is Coming to Town" warns that Santa sees if children are naughty or nice.
Principles of Student Engagement and Transfer of Learning for Online Training (Nor Azida Azhari)
The document outlines eight principles for engaging students and transferring learning in online courses: learning over teaching; active learning through interactive exercises and discussion boards; learner thinking through problem solving; cooperative and collaborative activities; learner choice and decision making through non-linear navigation; personally relevant context through stories and simulations; consequence feedback (showing rather than telling); and learner reflection through self-reflection and evaluation.
The Constitutional Court holds that the guarantees comprised by the right to due process must be observed not only in the judicial sphere but, in the same way, in administrative-sanctioning and parliamentary proceedings.
Nursing care assessment sheet for patients with a specific medical diagnosis in a particular hospital inpatient ward. The sheet provides a format for collecting general nursing data, including the identity of the patient and the responsible party, medical diagnosis, chief complaint, health history, the patient's nursing history, physical examination, and interventions and therapy.
This presentation describes how ExactTarget uses a particular wiki (MindTouch) to automate workflow processes, as well as generate documentation for the web services API to their application. Automated workflows include assigning writing, identifying articles that have had content contributed by people outside the department requiring further review, and automating the publication of new and changed content from the development wiki to the delivery wiki. Other automated processes include producing “white labeled” documentation from branded documentation.
In the ExactTarget application, much functionality is provided through web services. To make good use of the API calls, clients must understand the objects, methods, parameters, and properties exposed by the WSDL. In the past, this documentation was developed manually, and was often out of date, incorrect, or both.
Through collaboration with the development group, code was developed that builds documentation in the MindTouch wiki that documents the relationships between the objects, methods, parameters, and properties. The code also pulls definitions of these entities from the web service. While the generated pages do not fully document the API, they provide a very strong starting point and serve as a template for the SMEs who fill in the gaps that cannot be determined by parsing the web service. By identifying all of the web service entities, the auto-generated pages ensure that important elements of the application are not missed.
With each release of a new API, the code can generate an up-to-date list of the entities and update the documentation templates without disturbing information that has been manually entered by SMEs. This process allows the API documentation to be much more complete, accurate, and timely for our clients.
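The generation step described above can be illustrated with a small sketch: parse the WSDL, enumerate its operations, and emit a documentation stub for each one while preserving any page an SME has already written. The WSDL snippet, page format, and function names here are illustrative assumptions, not ExactTarget's actual code.

```python
# Hypothetical sketch: extract operation names from a WSDL document and
# emit documentation stubs, leaving manually written pages untouched.
import xml.etree.ElementTree as ET

WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"

SAMPLE_WSDL = """<definitions xmlns="http://schemas.xmlsoap.org/wsdl/">
  <portType name="SoapService">
    <operation name="Create"/>
    <operation name="Retrieve"/>
    <operation name="Update"/>
  </portType>
</definitions>"""

def operations(wsdl_text):
    """Return the operation names declared in a WSDL document."""
    root = ET.fromstring(wsdl_text)
    return [op.get("name") for op in root.iter(f"{{{WSDL_NS}}}operation")]

def build_stubs(wsdl_text, existing_pages):
    """Create a doc stub per operation; keep pages SMEs already wrote."""
    stubs = {}
    for name in operations(wsdl_text):
        if name in existing_pages:
            stubs[name] = existing_pages[name]  # preserve manual content
        else:
            stubs[name] = f"## {name}\n\n_TODO: SME description._"
    return stubs

pages = build_stubs(SAMPLE_WSDL, {"Create": "## Create\n\nHand-written docs."})
```

Re-running `build_stubs` against a new WSDL adds stubs for new entities without disturbing the hand-written pages, mirroring the release process the summary describes.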
Ty Howard is an experienced IT project management instructor and consultant with over 15 years of experience. He holds a PMP certification and has established several project management offices. He teaches at the university level and speaks at large conferences. His educational background includes degrees in sociology, public administration, and instructional technology. He believes in interactive, motivating education. His company, Biz-Nova Consulting, provides IT project management training.
Building a healthy data ecosystem around Kafka and Hadoop: Lessons learned at... (Yael Garten)
2017 StrataHadoop SJC conference talk. https://conferences.oreilly.com/strata/strata-ca/public/schedule/detail/56047
Description:
So, you finally have a data ecosystem with Kafka and Hadoop both deployed and operating correctly at scale. Congratulations. Are you done? Far from it.
As the birthplace of Kafka and an early adopter of Hadoop, LinkedIn has 13 years of combined experience using Kafka and Hadoop at scale to run a data-driven company. Both Kafka and Hadoop are flexible, scalable infrastructure pieces, but using these technologies without a clear idea of what the higher-level data ecosystem should be is perilous. Shirshanka Das and Yael Garten share best practices around data models and formats, choosing the right level of granularity of Kafka topics and Hadoop tables, and moving data efficiently and correctly between Kafka and Hadoop and explore a data abstraction layer, Dali, that can help you to process data seamlessly across Kafka and Hadoop.
Beyond pure technology, Shirshanka and Yael outline the three components of a great data culture and ecosystem and explain how to create maintainable data contracts between data producers and data consumers (like data scientists and data analysts) and how to standardize data effectively in a growing organization to enable (and not slow down) innovation and agility. They then look to the future, envisioning a world where you can successfully deploy a data abstraction of views on Hadoop data, like a data API as a protective and enabling shield. Along the way, Shirshanka and Yael discuss observations on how to enable teams to be good data citizens in producing, consuming, and owning datasets and offer an overview of LinkedIn’s governance model: the tools, process and teams that ensure that its data ecosystem can handle change and sustain #DataScienceHappiness.
Strata 2017 (San Jose): Building a healthy data ecosystem around Kafka and Ha... (Shirshanka Das)
The document describes the first phase of developing the OnScience portal, which involved designing the architecture and schematics. Key points:
- The team split into groups based on skills to work on different phases. Phase 1 focused on architecture.
- Modules like a researcher rating system were planned to make the portal more useful than existing sites. The rating system considered factors like publications.
- Developing a robust e-commerce platform was a challenge to balance user and business interests.
- A dummy platform tested the rating system algorithm by having users create profiles before the public launch.
- The main page layout was designed using interface tools to optimize the user experience. PHP and JavaScript were selected for the technical implementation.
This document discusses continuous compliance and DevSecOps best practices followed by financial services organizations.
Continuous compliance is defined as an ongoing process of proactive risk management that delivers predictable, transparent, and cost-effective compliance results. It involves continuously monitoring compliance controls, providing real-time alerts for failures along with remediation recommendations, and maintaining up-to-date policies. Best practices discussed include defining CIS controls and benchmarks, providing transparent compliance dashboards, and automating fixes for breaches.
DevSecOps is introduced as bringing security earlier into the application development lifecycle to minimize vulnerabilities. It aims to make everyone accountable for security. Challenges discussed include security teams struggling to keep up with the pace of DevOps.
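As a minimal illustration of the monitoring loop described above, the sketch below evaluates a set of controls against a host configuration and emits alerts with remediation hints. The control names and remediation texts are hypothetical, loosely modeled on CIS-style benchmarks rather than taken from the document.

```python
# Illustrative continuous-compliance check loop: evaluate each control,
# flag failures, and attach a remediation recommendation.

CONTROLS = {
    "ssh_root_login_disabled": lambda cfg: cfg.get("PermitRootLogin") == "no",
    "password_max_age_90":     lambda cfg: cfg.get("PASS_MAX_DAYS", 999) <= 90,
}

REMEDIATION = {
    "ssh_root_login_disabled": "Set 'PermitRootLogin no' in sshd_config.",
    "password_max_age_90":     "Set PASS_MAX_DAYS to 90 or less in login.defs.",
}

def evaluate(config):
    """Run every control against a host config; return alerts for failures."""
    alerts = []
    for name, check in CONTROLS.items():
        if not check(config):
            alerts.append({"control": name, "fix": REMEDIATION[name]})
    return alerts

# One compliant setting, one breach: root login is still permitted.
host = {"PermitRootLogin": "yes", "PASS_MAX_DAYS": 60}
alerts = evaluate(host)
```

Running such checks on a schedule, and wiring the alerts into a dashboard and an automated-fix pipeline, is the pattern the summary's "real-time alerts and remediation recommendations" implies.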
Directi Information Session on Campus @ IITD (Directi Group)
Directi is a technology company that started in 1998 and has since grown significantly. It has over 550 employees across offices in India, UAE, China, and the US. Some of its businesses include web hosting, email hosting, payment processing, and social media platforms. Life at Directi includes working with smart colleagues, challenging projects, an open culture, flexibility, and work-life balance. The company seeks software developers and offers roles from individual contributors to management. A variety of programming languages, tools, and technologies are used.
Real time insights for better products, customer experience and resilient pla... (Balvinder Hira)
Businesses are building digital platforms on modern architecture principles: domain-driven design, microservice-based, and event-driven. These platforms are becoming ever more modular, flexible, and complex.
While they are built with principles such as loose coupling, independent scaling, and plug-and-play components, along with regulatory and security considerations on data, this complexity leads to many unknowns and grey areas across the architecture. Details of how the different components interact with each other are lost, and generating insights becomes a multi-team, multi-stage, and hence multi-day activity.
Multiple users and stakeholders of the platform want different and timely insights in order to take both corrective and preventive actions. Business teams want to know how the business is doing in every corner of the country in near real time, at zipcode granularity. Tech teams want to correlate flow changes with system health, including downstream stability, as it happens. Knowing these details also feeds back into the platform itself, making it more efficient, and into the underlying business process.
In this talk we share how we made all the business and technical insights of a complicated platform available in real time, with limited incremental effort and constant validation of the ideas and slices with business teams. Since the client was a bank, we also touch on handling financial data securely while still enabling insights for a large group of stakeholders.
We kept self-service at the center of our solution, to accommodate a growing number of components in the source platform, evolving requirements, and even new platforms altogether. Configurability and scalability were key: it was important that all the data collected from the source platform was discoverable and presentable. This also led to evolving the solution along the lines of domain data products, where data is generated and consumed by those who understand it best.
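The kind of near-real-time, zipcode-granularity view the talk describes can be sketched as a windowed counter keyed by zipcode: events stream in, and only those inside the window count toward the live snapshot. The event fields and window size below are illustrative assumptions, not details from the talk.

```python
# Hypothetical sketch of a near-real-time aggregation at zipcode granularity.
from collections import defaultdict

WINDOW_SECONDS = 60

def aggregate(events, now):
    """Count events per zipcode that fall within the last WINDOW_SECONDS."""
    counts = defaultdict(int)
    for e in events:
        if now - e["ts"] <= WINDOW_SECONDS:
            counts[e["zipcode"]] += 1
    return dict(counts)

stream = [
    {"ts": 100, "zipcode": "560001"},
    {"ts": 130, "zipcode": "560001"},
    {"ts": 140, "zipcode": "110001"},
    {"ts": 10,  "zipcode": "560001"},  # too old, falls outside the window
]
snapshot = aggregate(stream, now=150)
```

In a production platform the same logic would run over a streaming pipeline rather than an in-memory list, but the shape of the computation (key by zipcode, bound by a time window) is the same.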
This DevOps CTO Masterclass covers DevOps tools, methodologies, and principles. The presentation introduces DevOps and its history, then discusses when DevOps is needed through a case study of a company that implemented DevOps to improve their development process. The remainder of the presentation covers DevOps practices for various stages including planning, coding, building, testing, deploying, operating, and monitoring. Key takeaways are to plan and communicate, automate processes, and continuously improve.
The document provides an overview of open source software, its history and uses in libraries. It discusses evaluating open source solutions and factors to consider such as community support, total cost of ownership, and technical requirements. Resources for finding and evaluating open source software are also listed.
Atos Consulting World Class IT Perspectives Technology Trends (guesta9bf56)
The document discusses technology trends and provides a suggested approach for trend watching. It outlines an ICT scan method involving analyzing organizational impacts of trends, identifying opportunities, and developing a portfolio to guide technology investments. Examples of trends include open source software, wireless networks, enterprise applications, and technologies enabling mobility, cost cuts and new business opportunities. The approach helps organizations identify relevant trends and develop a structured innovation process.
Adam wrote a letter to his mother describing a meeting he attended but found confusing, with unfamiliar terminology being used. The document then summarizes several emerging technical standards and proposals around learning technologies, including the Enterprise Web Services specification, Learning Tools Interoperability, Common Cartridge, and the Digital Interactive Content Exchange. It calls for further validation, exploration, and simplification of these draft standards.
IEEE CCNC 2011: Kalman Graffi - LifeSocial.KOM: A Secure and P2P-based Soluti... (Kalman Graffi)
The phenomenon of online social networks now reaches millions of Internet users. In these networks, users present themselves and their interests, and maintain the social links through which they interact with other users. In this paper we present LifeSocial.KOM, a p2p-based platform for secure online social networks that provides the functionality of common online social networks in a totally distributed and secure manner. It is plugin-based, and thus extendible in its functionality, providing secure communication, access-controlled storage, and monitored quality of service, addressing the needs of both users and system providers. The platform operates solely on the resources of the users, eliminating the concentration of crucial operational costs at a single provider. In a testbed evaluation, we show the feasibility of the approach and point out the potential of the p2p paradigm in the field of online social networks.
This document provides an overview of an internship project completed by three interns at HCL Infosystems. It details the training received on the Trend Micro IWSS security suite, the timeline of the 6-week project, requirements for an internal information portal, and descriptions of the key pages developed. An intranet website was created allowing all visitors to view notices, logged in users to post forums and add comments, and administrators to add/delete content and users. Tables were created in a MySQL database to store user, notice, post and comment data. The project aimed to enhance the existing user profile portal.
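The data model sketched in that summary (users, notices, forum posts, comments) might look like the following, shown with SQLite standing in for MySQL so the example is self-contained; all table and column names are illustrative guesses, not taken from the project.

```python
# Minimal sketch of the portal's data model, with SQLite in place of MySQL.
# Tables are inferred from the summary: users, notices, posts, comments.
import sqlite3

SCHEMA = """
CREATE TABLE users    (id INTEGER PRIMARY KEY, name TEXT, is_admin INTEGER);
CREATE TABLE notices  (id INTEGER PRIMARY KEY, title TEXT, body TEXT);
CREATE TABLE posts    (id INTEGER PRIMARY KEY,
                       user_id INTEGER REFERENCES users(id), body TEXT);
CREATE TABLE comments (id INTEGER PRIMARY KEY,
                       post_id INTEGER REFERENCES posts(id),
                       user_id INTEGER REFERENCES users(id), body TEXT);
"""

db = sqlite3.connect(":memory:")
db.executescript(SCHEMA)
db.execute("INSERT INTO users VALUES (1, 'intern', 0)")
db.execute("INSERT INTO posts VALUES (1, 1, 'First forum post')")
db.execute("INSERT INTO comments VALUES (1, 1, 1, 'Nice!')")

# Anyone may view notices; logged-in users post and comment, so comments
# join back to the user who wrote them.
rows = db.execute(
    "SELECT u.name, c.body FROM comments c JOIN users u ON u.id = c.user_id"
).fetchall()
```

The admin-only add/delete operations the summary mentions would sit on top of this schema as ordinary INSERT/DELETE statements gated by the `is_admin` flag.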
The document discusses how Web 2.0 technologies like blogs, wikis, RSS, and user-generated content have changed how people use and share information online. It argues that services should embrace these new technologies and practices, such as allowing external content to be embedded, trusting users, and developing lightweight and distributed systems rather than trying to compete directly with large commercial providers.
The document contains lyrics from three Christmas songs: "Hit the Road Jack" which is about a man being told to leave by his partner, "Rudolph the Red-Nosed Reindeer" telling the story of Rudolph joining Santa's sleigh, and "Santa Claus is Coming to Town" describing Santa watching children to see who is naughty or nice.
El Tribunal Constitucional sostiene que las garantías que comprende el derecho al debido proceso no solo deben observarse en el ámbito jurisdiccional, sino que, de igual modo, deben ser contempladas en las instancias administrativa sancionatoria y parlamentaria.
Lembar pengkajian asuhan keperawatan pada pasien dengan diagnosa medis tertentu di ruang rawat inap rumah sakit tertentu. Lembar ini berisi format pengumpulan data umum keperawatan yang meliputi identitas pasien dan penanggung jawab, diagnosa medis, keluhan utama, riwayat kesehatan, riwayat keperawatan pasien, pemeriksaan fisik, dan tindakan serta terapi.
This presentation describes how ExactTarget uses a particular wiki (MindTouch) to automate workflow processes, as well as generate documentation for the web services API to their application. Automated workflows include assigning writing, identifying articles that have had content contributed by people outside the department requiring further review, and automating the publication of new and changed content from the development wiki to the delivery wiki. Other automated processes include producing “white labeled” documentation from branded documentation.
In the ExactTarget application, much functionality is provided through web services. To make good use of the API calls, clients must understand the objects, methods, parameters, and properties exposed by the WSDL. In the past, this documentation was developed manually, and was often out of date, incorrect, or both.
Through collaboration with the development group, code was developed that builds documentation in the MindTouch wiki that documents the relationships between the objects, methods, parameters, and properties. The code also pulls definitions of these entities from the web service. While the generated pages do not fully document the API, they provide a very strong starting point and serve as a template for the SMEs who fill in the gaps that cannot be determined by parsing the web service. By identifying all of the web service entities, the auto-generated pages ensure that important elements of the application are not missed.
With each release of a new API, the code can generate an up-to-date list of the entities and update the documentation templates without disturbing information that has been manually entered by SMEs. This process allows the API documentation to be much more complete, accurate, and timely for our clients.
Ty Howard is an experienced IT project management instructor and consultant with over 15 years of experience. He holds a PMP certification and has established several project management offices. He teaches at the university level and speaks at large conferences. His educational background includes degrees in sociology, public administration, and instructional technology. He believes in interactive, motivating education. His company, Biz-Nova Consulting, provides IT project management training.
Building a healthy data ecosystem around Kafka and Hadoop: Lessons learned at...Yael Garten
2017 StrataHadoop SJC conference talk. https://conferences.oreilly.com/strata/strata-ca/public/schedule/detail/56047
Description:
So, you finally have a data ecosystem with Kafka and Hadoop both deployed and operating correctly at scale. Congratulations. Are you done? Far from it.
As the birthplace of Kafka and an early adopter of Hadoop, LinkedIn has 13 years of combined experience using Kafka and Hadoop at scale to run a data-driven company. Both Kafka and Hadoop are flexible, scalable infrastructure pieces, but using these technologies without a clear idea of what the higher-level data ecosystem should be is perilous. Shirshanka Das and Yael Garten share best practices around data models and formats, choosing the right level of granularity of Kafka topics and Hadoop tables, and moving data efficiently and correctly between Kafka and Hadoop and explore a data abstraction layer, Dali, that can help you to process data seamlessly across Kafka and Hadoop.
Beyond pure technology, Shirshanka and Yael outline the three components of a great data culture and ecosystem and explain how to create maintainable data contracts between data producers and data consumers (like data scientists and data analysts) and how to standardize data effectively in a growing organization to enable (and not slow down) innovation and agility. They then look to the future, envisioning a world where you can successfully deploy a data abstraction of views on Hadoop data, like a data API as a protective and enabling shield. Along the way, Shirshanka and Yael discuss observations on how to enable teams to be good data citizens in producing, consuming, and owning datasets and offer an overview of LinkedIn’s governance model: the tools, process and teams that ensure that its data ecosystem can handle change and sustain #DataScienceHappiness.
Strata 2017 (San Jose): Building a healthy data ecosystem around Kafka and Ha...Shirshanka Das
So, you finally have a data ecosystem with Kafka and Hadoop both deployed and operating correctly at scale. Congratulations. Are you done? Far from it.
As the birthplace of Kafka and an early adopter of Hadoop, LinkedIn has 13 years of combined experience using Kafka and Hadoop at scale to run a data-driven company. Both Kafka and Hadoop are flexible, scalable infrastructure pieces, but using these technologies without a clear idea of what the higher-level data ecosystem should be is perilous. Shirshanka Das and Yael Garten share best practices around data models and formats, choosing the right level of granularity of Kafka topics and Hadoop tables, and moving data efficiently and correctly between Kafka and Hadoop and explore a data abstraction layer, Dali, that can help you to process data seamlessly across Kafka and Hadoop.
Beyond pure technology, Shirshanka and Yael outline the three components of a great data culture and ecosystem and explain how to create maintainable data contracts between data producers and data consumers (like data scientists and data analysts) and how to standardize data effectively in a growing organization to enable (and not slow down) innovation and agility. They then look to the future, envisioning a world where you can successfully deploy a data abstraction of views on Hadoop data, like a data API as a protective and enabling shield. Along the way, Shirshanka and Yael discuss observations on how to enable teams to be good data citizens in producing, consuming, and owning datasets and offer an overview of LinkedIn’s governance model: the tools, process and teams that ensure that its data ecosystem can handle change and sustain #datasciencehappiness.
The document describes the first phase of developing the OnScience portal, which involved designing the architecture and schematics. Key points:
- The team split into groups based on skills to work on different phases. Phase 1 focused on architecture.
- Modules like a researcher rating system were planned to make the portal more useful than existing sites. The rating system considered factors like publications.
- Developing a robust e-commerce platform was a challenge to balance user and business interests.
- A dummy platform tested the rating system algorithm by having users create profiles before the public launch.
- The main page layout was designed using interface tools to optimize the user experience. PHP and JavaScript were selected for the technical
This document discusses continuous compliance and DevSecOps best practices followed by financial services organizations.
Continuous compliance is defined as an ongoing process of proactive risk management that delivers predictable, transparent, and cost-effective compliance results. It involves continuously monitoring compliance controls, providing real-time alerts for failures and remediation recommendations, and maintaining up-to-date policies. Best practices for continuous compliance discussed include defining CIS controls and benchmarks, achieving transparent compliance dashboards and automated fixes for breaches.
DevSecOps is introduced as bringing security earlier in the application development lifecycle to minimize vulnerabilities. It aims to make everyone accountable for security. Challenges discussed include security teams struggling to keep up with DevOps pace and
Directi Information Session on Campus @ IITDDirecti Group
Directi is a technology company that started in 1998 and has since grown significantly. It has over 550 employees across offices in India, UAE, China, and the US. Some of its businesses include web hosting, email hosting, payment processing, and social media platforms. Life at Directi includes working with smart colleagues, challenging projects, an open culture, flexibility, and work-life balance. The company seeks software developers and offers roles from individual contributors to management. A variety of programming languages, tools, and technologies are used.
Real time insights for better products, customer experience and resilient pla...Balvinder Hira
Businesses are building digital platforms with modern architecture principles like domain driven design, microservice based, and event-driven. These platforms are getting ever so modular, flexible and complex.
While they are built with architecture principles like - loose coupling, individually scaling, plug-and-play components; regulations and security considerations on data - complexity leads to many unknown and grey areas in the entire architecture. Details on how the different components of this complex architecture interact with each other are lost. Generating insights becomes multi-teams, multi-staged activity and hence multi-days activity.
Multiple users and stakeholders of the platform want different and timely insights to take both corrective and preventive actions.Business teams want to know how business is doing in every corner of the country near real time at a zipcode granularity. Tech teams want to correlate flow changes with system health including that of downstream stability as it happens.Knowing these details also helps in providing the feedback to the platform itself, to make it more efficient and also to the underlying business process.
In this talk we intend to share how we made all the business and technical insights of a complicated platform available in realtime with limited incremental effort and constant validation of the ideas and slices with business teams. Since the client was a Banking client, we will also touch base handling of financial data in a secure way and still enabling insights for a large group of stakeholders.
We kept the self-service aspect at the center of our solution - to accommodate increasing components in the source platform, evolving requirements, even to support new platforms altogether. Configurability and Scalability were key here, it was important that all the data that was collected from the source platform was discoverable and presentable. This also led to evolving the solution in lines of domain data products, where the data is generated and consumed by those who understand it the best.
This DevOps CTO Masterclass covers DevOps tools, methodologies, and principles. The presentation introduces DevOps and its history, then discusses when DevOps is needed through a case study of a company that implemented DevOps to improve their development process. The remainder of the presentation covers DevOps practices for various stages including planning, coding, building, testing, deploying, operating, and monitoring. Key takeaways are to plan and communicate, automate processes, and continuously improve.
The document provides an overview of open source software, its history and uses in libraries. It discusses evaluating open source solutions and factors to consider such as community support, total cost of ownership, and technical requirements. Resources for finding and evaluating open source software are also listed.
Atos Consulting World Class IT Perspectives Technology Trends — guesta9bf56
The document discusses technology trends and provides a suggested approach for trend watching. It outlines an ICT scan method involving analyzing organizational impacts of trends, identifying opportunities, and developing a portfolio to guide technology investments. Examples of trends include open source software, wireless networks, enterprise applications, and technologies enabling mobility, cost cuts and new business opportunities. The approach helps organizations identify relevant trends and develop a structured innovation process.
Adam wrote a letter to his mother describing a meeting he attended but found confusing, with unfamiliar terminology being used. The document then summarizes several emerging technical standards and proposals around learning technologies, including the Enterprise Web Services specification, Learning Tools Interoperability, Common Cartridge, and the Digital Interactive Content Exchange. It calls for further validation, exploration, and simplification of these draft standards.
IEEE CCNC 2011: Kalman Graffi - LifeSocial.KOM: A Secure and P2P-based Soluti... — Kalman Graffi
The phenomenon of online social networks reaches millions of Internet users today. In these networks, users present themselves, their interests, and the social links they use to interact with other users. In this paper we present LifeSocial.KOM, a p2p-based platform for secure online social networks that provides the functionality of common online social networks in a totally distributed and secure manner. It is plugin-based, and thus extensible in its functionality, providing secure communication and access-controlled storage as well as monitored quality of service, addressing the needs of both users and system providers. The platform operates solely on the resources of the users, eliminating the concentration of crucial operational costs at a single provider. In a testbed evaluation, we show the feasibility of the approach and point out the potential of the p2p paradigm in the field of online social networks.
This document provides an overview of an internship project completed by three interns at HCL Infosystems. It details the training received on the Trend Micro IWSS security suite, the timeline of the 6-week project, requirements for an internal information portal, and descriptions of the key pages developed. An intranet website was created allowing all visitors to view notices, logged in users to post forums and add comments, and administrators to add/delete content and users. Tables were created in a MySQL database to store user, notice, post and comment data. The project aimed to enhance the existing user profile portal.
The document discusses how Web 2.0 technologies like blogs, wikis, RSS, and user-generated content have changed how people use and share information online. It argues that services should embrace these new technologies and practices, such as allowing external content to be embedded, trusting users, and developing lightweight and distributed systems rather than trying to compete directly with large commercial providers.
The document discusses how Web 2.0 technologies like blogs, wikis, RSS, and user-generated content have changed how people use and share information online. It argues that services like Intute were pioneers in these approaches before the term "Web 2.0" was coined. Looking ahead, it suggests institutions embrace new models where commercial services host content and applications, and find ways to enhance rather than compete with popular third-party sites.
KB Seminars: Working with Technology - Platforms; 10/13 — MDIF
This document provides an overview and agenda for a technology seminar discussing technology platforms and decision criteria. It will cover the purpose of platforms, the planning and decision making process, and do a comparison of major open source platforms. The document defines technology platforms and outlines various decision criteria to consider, including technical requirements, business factors like costs, and open source versus proprietary software pros and cons. Useful links are also provided.
Zapbuild Technology is an enterprise business solutions provider. We can conceive, design, develop, and implement an enterprise application of any magnitude on any platform. We are globally accepted, as evidenced by the faith placed in us by multiple clients.
2. Outline Scope Facilities and features Technical considerations Development process Ethical and Legal considerations Topics to be covered in today’s presentation…
3. Scope Who? Students & Staff at the School of Computer Science. Societies and Social Event Organisers. Where? From the school computers. From home or halls of residence. Potentially from a mobile device. Who will use CS:Live? Where?
4. Features What will CS:Live offer to users? “A place to see live updates for news, social events, coursework and discussions.”
5. Social? Coursework? Discussions? News? Features What will CS:Live offer to users? Twitter Integration Timetable Integration Discussion / Moodle feature RSS
6. Technical Considerations How will CS:Live work? Modules Twitter Interrogator Moodle Timetables Discussions Feeds Platform
21. Development Process Expansion of the concept for other departments. Compatibility with mobile devices. Pushing the feed to other services via RSS/Atom. Do we have any future plans for CS:Live?
22. Ethical Considerations Some further considerations: Privacy: Storing personal data. Revealing activities to others. Security: Potentially holding passwords for services like Twitter. Displaying content from other sites. Reliability: People depend on its reliability, e.g. the timetable. Accessibility: Everyone should be able to take advantage of CS:Live. How will we make CS:Live fair, safe and secure?
23. In Conclusion… Definite scope and target audience Agreed feature set Technical background research Consideration of final appearance Structured, flexible time management Consideration for ethical, accessibility and legal issues How would we summarise our project & planning?
24. Thank You We will now take questions. Team CS:Live (A4) Milen Kindekov, Stephan Fifield, Cristina Finta, Damien Walsh, Joe Jefford Referenced URLs: PHPUnit (http://www.phpunit.de) PHP (http://www.php.net) PHPDocumentor (http://www.phpdoc.org) MySQL® (http://www.mysql.com) Apache (http://www.apache.org) jQuery (http://www.jquery.org) cURL (http://curl.haxx.se/) Made on a Mac
Editor's Notes
Introduce group
Today we’re going to talk about: - The scope of the project, who we want to target with our application, where we think users will access it from and how they will use it.<advance> - The facilities it offers to users, what each part of our application does and why we think there is a demand for them.<advance> - The technical considerations associated with our project, including the tools we want to make use of, both in development and when the application is running in production.<advance> - The development process, the guidelines and time limits we’re going to lay down.<advance> - And finally, the ethical and legal considerations of our project.
We expect our application to be used by students and staff at the School of Computer Science. We will also make arrangements so that event organisers, society managers and the like can log in and use our platform to publish information about their events.<advance>Access will most likely be from the school’s computers, or from home or halls of residence. We also intend to make the website accessible from newer mobile devices such as the iPhone and iPad, to take advantage of the extensive WiFi coverage around the university.
What we are aiming to provide is a straightforward place for computer science students to catch up with all the important news and events in the School of Computer Science – and, if that proves successful, perhaps the whole university in future. Since most of the current systems are independent, information can be spread out among them. What we hope to achieve is to organise everything students access in one place, while combining it with neat and useful features like timetables, deadlines, announcements and other study-related services. In our opinion, the best way to achieve this is to provide all the information we gather in the form of a live news feed accessible to all members of our website.
During our initial research and discussion phases, we thought of several things we all access regularly: social content, such as Facebook and Twitter; course-related content, specifically timetables and assignment deadlines; direct communication features such as forums and mailing lists; and news content like the BBC and newspaper websites. After discussing the possibilities for bringing these functions together, without attempting to replace them in some way, we decided that these features would be a good start:<advance>Twitter integration – pulling down tweets that a user has subscribed to in some way.<advance>Timetable integration – downloading and arranging the Computer Science timetables as a set of notifications for imminent classes or events.<advance>Discussions – a way of relating discussions on other platforms (i.e. Moodle) to events and notifications in a news feed.<advance>And to satisfy the requirement for news content, RSS (that’s “Really Simple Syndication”) – allowing a user to subscribe to their choice of news provider.
How will the system actually work? We decided that a modular system would work well, as well as improving the workflow in the development process (more about that shortly). The intention is to have the individual modules (i.e. Twitter, Moodle and Timetable) running independently, collecting information from their respective sources. The interrogator will run at regular intervals, accessing the modules’ individual stores and generating the appropriate notifications for users as rows in a database table. The users will see the notifications appear in the feed view, where they are subject to filtering predicates applied using the interface.
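The filtering step in the feed view can be sketched client-side. Below is a minimal JavaScript sketch; the notification fields and source names are illustrative assumptions, not the project's actual schema.

```javascript
// Hypothetical notification rows, as the interrogator might emit them
// into the feed table (field names are illustrative assumptions).
var notifications = [
  { id: 1, source: "twitter",   text: "New tweet from a subscribed account" },
  { id: 2, source: "timetable", text: "Lecture in 30 minutes" },
  { id: 3, source: "moodle",    text: "New reply in a discussion thread" },
  { id: 4, source: "rss",       text: "News headline from a subscribed feed" }
];

// The Filters widget flips sources on and off; the feed view then
// applies these predicates before rendering.
var enabledSources = { twitter: true, timetable: true, moodle: false, rss: true };

function filterFeed(items, enabled) {
  return items.filter(function (item) {
    return enabled[item.source] === true;
  });
}

var visible = filterFeed(notifications, enabledSources);
// visible omits the Moodle notification, matching its red/off switch.
```

In the real application these predicates would likely be applied server-side as well, so that a user's stored filter preferences shape the feed query itself.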
With regards to the actual development and implementation of the project, we have decided on several software packages that will be used. During development, we will be using standard text editors, as well as working on the graphical components of the design and interface in Photoshop and the GNU Image Manipulation Program (GIMP).<advance>Once the application development is complete and the project is in production, we will be running the code under PHP 5 + Apache. The database will be handled by MySQL. The implementation will also require the use of cURL / libcurl to access remote resources (for example, Moodle discussion pages and the School of Computer Science timetables). On the client side, our application will of course be using JavaScript, alongside the hugely popular jQuery library.<advance>Other than these pieces of software, we will also require use of the Twitter API. This will allow our application to download tweets en masse and apply more advanced predicate searches to tweets to cater for the specific configuration needs of individual users.
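On the client side, the jQuery + AJAX + JSON approach might look like the following sketch. The `/feed.php` endpoint and the payload field names are assumptions for illustration, and the text is assumed to have been made safe for display server-side.

```javascript
// Turn a JSON payload of notifications into feed markup.
// (The /feed.php endpoint and field names are illustrative assumptions;
// notification text is assumed to be escaped server-side already.)
function renderFeed(payload) {
  return payload.notifications.map(function (n) {
    return '<li class="' + n.source + '">' + n.text + "</li>";
  }).join("");
}

// With jQuery loaded, the feed view could poll the server like so:
//
//   setInterval(function () {
//     $.getJSON("/feed.php", function (payload) {
//       $("#feed").html(renderFeed(payload));
//     });
//   }, 30000);

var example = renderFeed({
  notifications: [
    { source: "timetable", text: "Lecture at 14:00" }
  ]
});
// example === '<li class="timetable">Lecture at 14:00</li>'
```

Keeping the markup-building step as a plain function like this also makes it easy to exercise in isolation, separately from the AJAX plumbing.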
We have also agreed on several other methods of improving the process of developing our application, with the intent of leaving us with a more solid, maintainable and expandable application that can be improved in the years to come. One of the ways we intend to do this is by maintaining strict coding standards throughout the project, ensuring consistency between the work produced by each team member. Version control will also play a major role, helping us manage the developmental stages and providing a log of who changed what in the application. We will expect each team member to provide accurate and sufficiently detailed notes when committing changes. For the parts of the project that use PHP classes, we hope to make use of PHPDocumentor. This will allow us to easily produce a detailed set of documentation explaining the interfaces by which the individual modules (i.e. Timetable, Moodle and Twitter) communicate with the feed system. This documentation will also be extremely helpful during development, so that team members can refer to it when writing the later parts.<advance>With regards to testing, an option we have discussed is PHPUnit. This is a great standard for testing PHP and is very widely used. It allows powerful assertive unit testing that builds up functionality in stages. In terms of output, after considering the browsers used most commonly at the School (Firefox under Linux and Internet Explorer 7 under Windows XP), we have decided that the pages will be visually tested in these two browsers to ensure they work properly.<advance>We have agreed to ensure all our PHP produces XHTML 1.0 Strict validated output. All pages will be validated at major stages, and extensively during the debugging phase, to ensure our application works in as many browsers as possible.
Our greatest challenge when creating the basic design for our website was that we wanted it to be both simple in general and at the same time offer all the functionality needed by our users. For the colours, we decided to match the University's emblem colours: a tinted purple, which contrasts with the white that takes up all the other parts of the website. Our layout is split into three main parts, which I will now go into in more detail. First of all we have the banner and menu section. Our banner includes the logo of our website along with the logo of the University, providing a link to its main website. The menu includes links to our other website pages, such as the discussion or “Talks” page, an “Alerts” page and the customisable “Profile” page.<advance>The main purpose of the main body section is to present all the information gathered from the feeds in a structured and readable way. Each posted feed item, whether it is an announcement, a new discussion topic or an upcoming deadline, is accompanied by an appropriate picture, which tells you the nature of the posted information and what source was used to retrieve it.<advance>The side menu includes a couple of widget-like, user-orientated features. The Filters module allows users to filter out the information received through specific feeds. If a user decides to remove a certain feed, they flip its switch, which turns from green to red – meaning the feed has been turned off. Users can also add more feeds to the list using the “Add More Feeds” button at the bottom. This adds a certain flexibility to the general design concept.<advance>The last feature, again part of the side menu, is the mini-profile viewer. It offers users quick access to their profile customisation page and also allows them to manage their feeds (removing unnecessary feeds) in a more detailed way. Each user has their own profile picture, and their name is shown as part of the top menu.
This is another concept we developed for the discussions module. We think that this has a real sense of uniqueness about it, with an innovative and intuitive layout. The discussions centre around a topic in the middle of the screen. Clicking on a bubble in the spider diagram moves it to the centre, showing the related topics (in a hierarchy-style structure) around it. The icons underneath display some information about the particular topic or category: the star signifies the number of new, unread topics, and the magnifying glass shows the number of users currently reading that topic. Although this design could be difficult to implement, we think that the unique, innovative nature of the concept makes the extra effort and time allocation during development worth it. With regards to development time, Cristina will explain our planning…
Time planning. The first thing about planning: how much time do we have? The time allocated to the project is 9 weeks, which we divided as shown below. The plan is divided into 4 phases: 3 of them dedicated to the development of the website, and the last one the actual presentation. We agreed that everyone in the group should take these timing arrangements very seriously and treat the deadlines with the same importance as University deadlines. This way we will not fall behind the plan, and we also agreed that every member should be honest about his or her progress. As you can see in the picture, the plan includes buffer time for every activity, so if something goes wrong we will have time to make it right and will not run out of time, which is a serious problem in these kinds of projects. Every member of the group has something assigned to do, and everyone has to contribute to the development of the site. The important thing is that everybody has one thing to do at a time, so the phases will not overlap. This should make things a lot easier for every member, because we can actually see the progress and, when necessary, decide how much more time a person needs if something takes longer than planned. We also agreed that anyone who finishes their assigned work should help anyone who is having a hard time. This way we ensure that we are using our time properly and that everything goes as expected. Besides the phases shown, we agreed that everyone should do research over the winter break; things will go a lot more easily if we know exactly what we are supposed to do before the actual coding phase. The first phase is one week long and is the period in which we plan the project. In this phase we will agree on what everyone is supposed to do, by when, and how much time we allocate for every step in the next phases. Phase two is the coding and programming part.
In this part we will already know, from the phase before, which part is whose. This phase has 3 weeks assigned to it. We also thought it should take longer to do, because of the other University assignments we will still have, so we added a buffer of a week. Phase three is the testing and debugging part. This is two weeks long, because it is safer to assign a longer period for this part than to have a shorter one and then run out of time. We know that this part will be harder to do, so we decided to take our time working through it. The last phase is the presentation, in which the website will be shown and a demo provided. We strongly believe that we are going to stick to the plan and that we will not have any delays other than the ones we included.
In addition to our planning and time management, we considered a few possibilities for once the planned project time is over.<advance>For now, the page will be implemented only for the School of Computer Science. After observing user behaviour and receiving feedback from students, we hope that we will be able to link this website to other intranet websites, so computer scientists will be able to meet students from different courses.<advance>A certain improvement would be to make the website compatible with mobile phones. Our website is meant to make students’ lives easier, so we think it would be a really good idea for students to be able to access all the information they need anytime, anywhere. One possibility would be the production of a dedicated CS:Live app for Android / iOS devices.<advance>Another possibility we considered was to allow users to access their notifications in a hypermedia format such as RSS or Atom. This would allow them to access their feed using a client other than our web frontend, such as an RSS / Atom aggregator on an iPhone – providing more customisability for the user.
We have identified several key issues with regards to ethical and legal considerations.<advance>Privacy is a very important issue for a project like this one, involving many communicating users. CS:Live will be storing some personal information: a picture, name, course assignment, tutorial group. This information might not seem too important for one person, but a database of 200+ individuals could be considered a serious privacy issue. We need to make sure the site is secure to protect the privacy of every individual who uses it. We will write a privacy policy towards the end of development that makes clear to our users the measures we have taken. Privacy is also an issue in terms of the kind of notifications CS:Live will publish. Information about where specific individuals are, for example, is potentially open to abuse. This may need to be addressed later in development.<advance>Security is also a major consideration for our project. Because it is an integration service, aggregating information from different services, it may be necessary for CS:Live to store service passwords for sites like Twitter. This makes the security of our data more important than ever. The site will also be downloading and displaying content from other sites, e.g. RSS feeds – this needs to be filtered and made safe for display in a browser to avoid incidents involving Cross-Site Scripting (XSS) exploits.<advance>Another important consideration is the reliability of our service. People may become reliant on the accuracy of the notifications CS:Live provides.
For example, timetable and deadline notifications could have serious personal and academic implications for individuals if they are incorrect or inaccurate. This needs to be addressed, and notifications validated, before publishing.<advance>Another important issue is accessibility. We need to take into account that some users may have physical disabilities that prevent them from using some of the more traditional interaction methods. However, with new web technologies, helping these users get the most from our content is easier than ever. With a few additional HTML attributes, we can add speech metadata to let programs like JAWS read our content accurately and clearly to, for example, visually impaired users.
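The XSS filtering mentioned in the security notes above comes down to escaping any externally sourced content before it reaches the browser. A minimal JavaScript sketch of such a helper (the project would likely do this server-side in PHP, e.g. with htmlspecialchars; this is just an illustration of the idea):

```javascript
// Minimal HTML-escaping helper: content pulled from external feeds
// (RSS items, tweets, Moodle posts) should pass through something like
// this before being written into the page, to blunt XSS attempts.
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, "&amp;")   // must come first, or entities get double-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

var hostile = '<script>alert("xss")</script>';
var safe = escapeHtml(hostile);
// safe === '&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;'
```

Escaping on output like this, rather than trying to strip "dangerous" substrings on input, is the conventional defence because it preserves the original text while making it inert in an HTML context.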
In conclusion, we are confident our project has excellent potential. We have…<advance>a strongly defined target audience and scope – we know exactly what we want to achieve and have laid down appropriate bounds for development.<advance>We also have an agreed set of features – the ways in which we will achieve our targets of improving easy access to information critical to CS students.<advance>We have conducted technical background research to decide exactly which tools and services we will use.<advance>We have considered the potential final appearance of our application frontend, and produced mock-up designs to show our concepts.<advance>We have a solid plan for time management during development. While deadlines for individual parts of the project are in place, we have taken into consideration that extended amounts of time may be required for specific parts of the project.<advance>We have discussed the ethical issues pertaining to our project, as well as potential privacy and security issues. And finally, we have considered accessibility for as many users as possible – giving everyone a chance to experience CS:Live.
Thank you for your time. We hope you’ve enjoyed our presentation!