Proceedings of the 24th International Conference on Web3D Technology (Web3D), 2019.
Social media in virtual reality is in a high-growth market segment with influential products and services in virtual tourism, remote education, and business meetings. Nevertheless, no previous system has offered an online platform that renders a 6DoF mirrored world with geotagged social media in real time. In this paper, we introduce the technical details behind Geollery.com, which reconstructs a mirrored world at two levels of detail. Given a pair of latitude and longitude coordinates, our pipeline streams and caches depth maps, street view panoramas, and building polygons from the Google Maps and OpenStreetMap APIs. At a fine level of detail for close-up views, we render textured meshes using adjacent local street views and depth maps. When viewed from afar, we apply projection mappings to 3D geometries extruded from building polygons for a coarse level of detail. In contrast to teleportation, our system allows users to virtually walk through the mirrored world at street level. Our system integrates geotagged social media from both internal users and external sources such as Twitter, Yelp, and Flickr. We validate the real-time strategies of Geollery.com on various platforms including mobile phones, workstations, and head-mounted displays.
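The coarse level of detail described above, extruding 2D building polygons into 3D geometry, can be sketched in a few lines. This is a minimal illustration under our own assumptions (a convex counter-clockwise footprint and a naive fan-triangulated roof); the function name and representation are ours, not from the Geollery.com implementation.

```python
# Hypothetical sketch: extrude a 2D building footprint (as returned by
# OpenStreetMap-style building polygon queries) into a simple prism mesh.

def extrude_footprint(footprint, height):
    """footprint: list of (x, y) vertices in CCW order; returns (vertices, triangles)."""
    n = len(footprint)
    # Bottom ring at z = 0 followed by top ring at z = height.
    vertices = [(x, y, 0.0) for x, y in footprint] + \
               [(x, y, height) for x, y in footprint]
    triangles = []
    # Side walls: one quad (two triangles) per footprint edge.
    for i in range(n):
        j = (i + 1) % n
        triangles.append((i, j, n + j))
        triangles.append((i, n + j, n + i))
    # Roof: naive fan triangulation (valid only for convex footprints).
    for i in range(1, n - 1):
        triangles.append((n, n + i, n + i + 1))
    return vertices, triangles

verts, tris = extrude_footprint([(0, 0), (10, 0), (10, 6), (0, 6)], 12.0)
print(len(verts), len(tris))  # 8 10
```

The real pipeline would additionally apply projection mapping of street view imagery onto these extruded walls.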
Incredible presentation of the best examples of creativity on the Internet. I would describe it as Creativity 2.0 – creative and inspirational ideas and web projects from recent times, featuring sections on visualisation. The author is Tom Uglow - a creative director for Google & YouTube in Europe.
Geollery: A Mixed Reality Social Media Platform - Ruofei Du
We present Geollery, an interactive mixed reality social media platform for creating, sharing, and exploring geotagged information. Geollery introduces a real-time pipeline to progressively render an interactive mirrored world with three-dimensional (3D) buildings, internal user-generated content, and external geotagged social media. This mirrored world allows users to see, chat, and collaborate with remote participants with the same spatial context in an immersive virtual environment. We describe the system architecture of Geollery, its key interactive capabilities, and our design decisions. Finally, we conduct a user study with 20 participants to qualitatively compare Geollery with another social media system, Social Street View. Based on the participants' responses, we discuss the benefits and drawbacks of each system and derive key insights for designing an interactive mirrored world with geotagged social media. User feedback from our study reveals several use cases for Geollery including travel planning, virtual meetings, and family gatherings.
Social Street View: Blending Immersive Street Views with Geo-tagged Social Media - Ruofei Du
Paper and videos: http://www.socialstreetview.com
This paper presents an immersive geo-spatial social media system for virtual and augmented reality environments. With the rapid growth of photo-sharing social media sites such as Flickr, Pinterest, and Instagram, geo-tagged photographs are now ubiquitous. However, the current systems for their navigation are unsatisfyingly one- or two-dimensional. In this paper, we present our prototype system, Social Street View, which renders the geo-tagged social media in its natural geo-spatial context provided by immersive maps, such as Google Street View. This paper presents new algorithms for fusing and laying out the social media in an aesthetically pleasing manner with geospatial renderings, validates them with respect to visual saliency metrics, suggests spatio-temporal filters, and presents a system architecture that is able to stream geo-tagged social media and render it across a range of display platforms spanning tablets, desktops, head-mounted displays, and large-area room-sized curved tiled displays. The paper concludes by exploring several potential use cases including immersive social storytelling, learning about culture and crowd-sourced tourism.
Fusing Multimedia Data Into Dynamic Virtual Environments - Ruofei Du
In spite of the dramatic growth of virtual and augmented reality (VR and AR) technology, content creation for immersive and dynamic virtual environments remains a significant challenge. In this dissertation, we present our research in fusing multimedia data, including text, photos, panoramas, and multi-view videos, to create rich and compelling virtual environments.
First, we present Social Street View, which renders geo-tagged social media in its natural geo-spatial context provided by 360° panoramas. Our system takes into account visual saliency and uses maximal Poisson-disc placement with spatio-temporal filters to render social multimedia in an immersive setting. We also present a novel GPU-driven pipeline for saliency computation in 360° panoramas using spherical harmonics (SH). Our spherical residual model can be applied to virtual cinematography in 360° videos.
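The maximal Poisson-disc placement mentioned above can be illustrated with a simple dart-throwing sketch: candidate positions for social-media items are accepted only if they keep a minimum distance r from every previously placed item. This is a simplified stand-in for the actual placement algorithm, with parameters of our own choosing.

```python
import math
import random

def poisson_disc(width, height, r, attempts=2000, seed=42):
    """Dart-throwing Poisson-disc placement in a width x height region."""
    rng = random.Random(seed)
    placed = []
    for _ in range(attempts):
        p = (rng.uniform(0, width), rng.uniform(0, height))
        # Accept the candidate only if it is at least r from all placed points.
        if all(math.dist(p, q) >= r for q in placed):
            placed.append(p)
    return placed

points = poisson_disc(100, 100, 10)
# Every accepted pair is at least r apart by construction.
assert all(math.dist(p, q) >= 10
           for i, p in enumerate(points) for q in points[i + 1:])
```

With enough attempts the result approaches a maximal placement: no further item can be added without violating the spacing constraint.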
We further present Geollery, a mixed-reality platform to render an interactive mirrored world in real time with three-dimensional (3D) buildings, user-generated content, and geo-tagged social media. We conduct a user study with 20 participants to qualitatively evaluate Social Street View and Geollery. The user study has identified several use cases for these systems, including immersive social storytelling, experiencing culture, and crowd-sourced tourism.
We next present Video Fields, a web-based interactive system to create, calibrate, and render dynamic videos overlaid on 3D scenes. Our system renders dynamic entities from multiple videos, using early and deferred texture sampling. Video Fields can be used for immersive surveillance in virtual environments. Furthermore, we present VRSurus and ARCrypt projects to explore the applications of gestures, haptic feedback, and visual cryptography in virtual and augmented reality environments.
Finally, we present our work on Montage4D, a real-time system for seamlessly fusing multi-view video textures with dynamic meshes. We use geodesics on meshes with view-dependent rendering to mitigate spatial occlusion seams while maintaining temporal consistency. Our experiments show significant enhancement in rendering quality, especially for salient regions such as faces. We believe that Social Street View, Geollery, Video Fields, and Montage4D will greatly facilitate several applications such as virtual tourism, immersive telepresence, and remote education.
WEB SCRAPER UTILIZES GOOGLE STREET VIEW IMAGES TO POWER A UNIVERSITY TOUR - ijcsit
Due to the outbreak of the COVID-19 pandemic, college tours are no longer available, so many students have lost the opportunity to see their dream school's campus. To solve this problem, we developed a product called "Virtourgo," a university virtual tour website that uses Google Street View images gathered by a web scraper, allowing students to see what college campuses are like even when tours are unavailable during the pandemic. The project consists of four parts: the web scraper script, the GitHub server, the Google Domains DNS server, and the HTML files. Challenges we encountered include scraping repeated pictures and making the HTML dropdown menu jump to the correct location. We solved these by implementing Python and JavaScript functions that specifically target such challenges. Finally, after testing all the functions of the web scraper and website, we confirmed that it works as expected and can scrape and deliver tours of any university campus or public building we want.
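One fix the abstract mentions, skipping repeated pictures during scraping, is commonly done by content hashing: identical image bytes produce the same digest, so a set of seen digests filters duplicates. The helper below is our own hedged sketch, not Virtourgo's actual code.

```python
import hashlib

def dedupe_images(image_blobs):
    """Return image blobs with exact byte-level duplicates removed."""
    seen, unique = set(), []
    for blob in image_blobs:
        digest = hashlib.sha256(blob).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(blob)
    return unique

# Toy stand-ins for downloaded Street View tiles:
blobs = [b"tile-A", b"tile-B", b"tile-A", b"tile-C", b"tile-B"]
print(len(dedupe_images(blobs)))  # 3
```

Note this only catches byte-identical downloads; near-duplicate frames from adjacent panorama headings would need a perceptual hash instead.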
Science of culture? Computational analysis and visualization of cultural imag... - Lev Manovich
Concepts, research questions and examples of computational analysis and visualizations of cultural image collections from our research lab (softwarestudies.com) created between 2009 and 2015. Visualized datasets include 20,000 images from MoMA photo collection, 773 Vincent van Gogh paintings, and 2.3 million Instagram images from 13 cities worldwide. (Note that the original presentation has a few videos that are not part of this PDF document.)
Psychological Maps 2.0: A Web Engagement Enterprise Starting in London - Gabriela Agustini
Planners and social psychologists have suggested that the recognizability of the urban environment is linked to people's socio-economic well-being. We build a web game that puts the recognizability of London's streets to the test. It follows as closely as possible an experiment conducted by Stanley Milgram in 1972. The game picks random locations from Google Street View and tests whether users can judge the location in terms of the closest subway station, borough, or region. Each participant dedicates only a few minutes to the task (as opposed to 90 minutes in Milgram's experiment). We collect data from 2,255 participants (a sample one order of magnitude larger) and build a recognizability map of London based on their responses. We find that some boroughs have little cognitive representation; that the recognizability of an area is explained partly by its exposure to Flickr and Foursquare users and mostly by its exposure to subway passengers; and that areas with low recognizability do not fare any worse on the economic indicators of income, education, and employment, but do significantly suffer from the social problems of housing deprivation, poor living conditions, and crime. These results could not have been produced without analyzing life both offline and online: that is, without considering the interactions between urban places in the physical world and their virtual presence on platforms such as Flickr and Foursquare. This line of work sits at the crossroads of two emerging themes in computing research, where "web science" meets the "smart city" agenda.
29 March 2019. Presentation on the relation of digital and virtual heritage to digital humanities, related issues, and selected projects, at Curtin University, Perth, Australia.
Are museums a dial that only goes to 5? Michael Edson
For Social Media Week, Washington, D.C., "Defining and measuring social media success in museums and arts organizations." http://socialmediaweek.org/blog/event/are-you-remarkable-defining-and-measuring-social-media-success-in-museums-and-arts-organizations/#.US4XyOtARCQ
Online social media services enable people to share many aspects of their personal interests and passions with friends, acquaintances, and strangers. We are investigating how the display of social media in a workplace context can improve relationships among collocated colleagues. We have designed, developed, and deployed the Context, Content and Community Collage, which runs on large LCD touchscreen computers installed in eight locations throughout a research laboratory. This proactive display application senses nearby people via Bluetooth phones, and responds by incrementally adding photos associated with those people to an ambient collage shown on the screen. This paper describes the motivations, goals, design, and impact of the system, highlighting the ways the system has increased interactions and improved personal relationships among coworkers at the deployment site. We also look at how the creation of a shared physical window into online media has affected the use of that media.
Montage4D: Interactive Seamless Fusion of Multiview Video Textures - Ruofei Du
Project Site: http://montage4d.com
The commoditization of virtual and augmented reality devices and the availability of inexpensive consumer depth cameras have catalyzed a resurgence of interest in spatiotemporal performance capture. Recent systems like Fusion4D and Holoportation address several crucial problems in the real-time fusion of multiview depth maps into volumetric and deformable representations. Nonetheless, stitching multiview video textures onto dynamic meshes remains challenging due to imprecise geometries, occlusion seams, and critical time constraints. In this paper, we present a practical solution towards real-time seamless texture montage for dynamic multiview reconstruction. We build on the ideas of dilated depth discontinuities and majority voting from Holoportation to reduce ghosting effects when blending textures. In contrast to their approach, we determine the appropriate blend of textures per vertex using view-dependent rendering techniques, so as to avert fuzziness caused by the ubiquitous normal-weighted blending. By leveraging geodesics-guided diffusion and temporal texture fields, our algorithm mitigates spatial occlusion seams while preserving temporal consistency. Experiments demonstrate significant enhancement in rendering quality, especially in detailed regions such as faces. We envision a wide range of applications for Montage4D, including immersive telepresence for business, training, and live entertainment.
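The view-dependent blending idea above can be sketched per vertex: each camera's texture weight grows as the rendering view direction aligns with that camera's direction, instead of weighting by surface normals. The exponent and normalization below are our own assumptions, not the exact Montage4D formula.

```python
import math

def view_dependent_weights(view_dir, camera_dirs, sharpness=4.0):
    """Normalized per-camera texture weights for one vertex."""
    def norm(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)
    v = norm(view_dir)
    raw = []
    for d in camera_dirs:
        d = norm(d)
        # Favor cameras facing the same way as the current view.
        cos = sum(a * b for a, b in zip(v, d))
        raw.append(max(0.0, cos) ** sharpness)
    total = sum(raw) or 1.0
    return [w / total for w in raw]

# A camera aligned with the view dominates one at 90 degrees:
w = view_dependent_weights((0, 0, 1), [(0, 0, 1), (1, 0, 0)])
print(w)  # [1.0, 0.0]
```

In the full system these weights would additionally be diffused along mesh geodesics near occlusion seams and smoothed over time for temporal consistency.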
Video Fields: Fusing Multiple Surveillance Videos into a Dynamic Virtual Envi... - Ruofei Du
Paper and videos: http://www.videofields.com
Another interesting project: http://socialstreetview.com
Video Fields system fuses multiple videos, camera-world matrices from a calibration interface, static 3D models, as well as satellite imagery into a novel dynamic virtual environment. Video Fields integrates automatic segmentation of moving entities during the rendering pass and achieves view-dependent rendering in two ways: early pruning and deferred pruning. Video Fields takes advantage of the WebGL and WebVR technology to achieve cross-platform compatibility across smart phones, tablets, desktops, high-resolution tiled curved displays, as well as virtual reality head-mounted displays. See the supplementary video at http://video-fields.com.
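The moving-entity segmentation step can be illustrated with simple background subtraction: a pixel is marked dynamic when it differs from a static background frame by more than a threshold. Video Fields performs this on the GPU during the rendering pass; the CPU version over grayscale rows below is only an illustrative sketch with a threshold of our choosing.

```python
def segment_moving(background, frame, threshold=30):
    """Binary motion mask: 1 where the frame differs from the background."""
    return [[1 if abs(f - b) > threshold else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

# Toy 2x3 grayscale images; two pixels change between background and frame:
bg    = [[10, 10, 10], [10, 10, 10]]
frame = [[10, 200, 10], [10, 10, 199]]
print(segment_moving(bg, frame))  # [[0, 1, 0], [0, 0, 1]]
```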
One of my undergraduate slide decks from 2010. These slides introduce Chinese calligraphy. For the Chinese script and more information, please visit: http://course.duruofei.com/cs039/
Statistics for K-mer Based Splicing Analysis - Ruofei Du
http://www.duruofei.com/Public/course/CMSC702/CMSC702_Kmer_Ruofei_Du.pdf
It is well acknowledged that alternative splicing plays a crucial role in identifying variations of RNA transcriptomes. In high-throughput short-read RNA sequencing, splicing analysis is a challenging task due to the uncertainty and time complexity of aligning reads to the genome and transcriptome. In this paper, we introduce a k-mer based statistical method for splicing event analysis. The k-mer based representation avoids time-consuming read alignment, and significantly differential k-mers between control groups of samples are a good indicator of certain types of splicing events. We explore statistical models including the t-test, DESeq, and the likelihood ratio test to identify statistically significant differential k-mers. We also develop a fast k-mer mapping method, in place of Bowtie, to identify whether a k-mer from read data matches the genome or transcriptome.
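The core of the k-mer representation can be sketched in two steps: count all k-mers per sample, then score a k-mer across two sample groups with a test statistic. The abstract names several tests; the sketch below shows only the t-test variant (Welch's form), with made-up toy counts.

```python
import math
from collections import Counter

def kmer_counts(read, k=3):
    """Count all overlapping k-mers in one read."""
    return Counter(read[i:i + k] for i in range(len(read) - k + 1))

def welch_t(a, b):
    """Welch's t statistic for one k-mer's counts across two sample groups."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

print(kmer_counts("GATTACA")["ATT"])  # 1
# Toy counts of one k-mer in control vs. case samples; a large |t| flags
# the k-mer as a candidate indicator of a splicing event:
print(round(welch_t([5, 6, 5, 7], [12, 14, 13, 15]), 2))  # -9.64
```

In practice the statistic would be computed for every k-mer and corrected for multiple testing before calling differential k-mers.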
Online Vigilance Analysis Combining Video and Electrooculography Features - Ruofei Du
http://www.duruofei.com/Research/drowsydriving
In this paper, we propose a novel system to analyze vigilance levels by combining both video and electrooculography (EOG) features. The video features extracted from an infrared camera include the percentage of eyelid closure (PERCLOS) and eye blinks, while slow eye movements (SEM) and rapid eye movements (REM) are extracted from the EOG signals. In addition, other features such as yawn frequency, body posture, and face orientation are extracted from the video using an Active Shape Model (ASM). The results of our experiments indicate that our approach outperforms existing approaches based on either video or EOG alone. In addition, the prediction offered by our model is in close proximity to the actual error rate of the subject. We believe this method can be widely applied to prevent accidents such as fatigued driving.
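PERCLOS, one of the video features named above, is simply the fraction of frames in a window where eyelid closure exceeds a threshold (80% is a common textbook choice). The threshold and window handling below are generic assumptions, not the paper's exact parameters.

```python
def perclos(closure_per_frame, threshold=0.8):
    """Fraction of frames whose eyelid-closure ratio meets the threshold."""
    closed = sum(1 for c in closure_per_frame if c >= threshold)
    return closed / len(closure_per_frame)

# 2 of 8 frames count as "closed", so PERCLOS = 0.25:
print(perclos([0.1, 0.9, 0.2, 0.85, 0.3, 0.1, 0.2, 0.4]))  # 0.25
```

A high PERCLOS over a sliding window is then one input, alongside the EOG and posture features, to the vigilance predictor.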
Deliberately Planning and Acting for Angry Birds with Refinement Methods - Ruofei Du
Authors: Ruofei Du, Zebao Gao, Zheng Xu; Advisors: Prof. Dana S. Nau and Dr. Vikas Shivashankar; This is a supplementary video for CMSC 722 AI Planning.
Check out papers, slides and more here: http://duruofei.com/Research/angrybirds
Angry Birds has been a popular game throughout the world since 2009. The goal of the game is to destroy all the pigs and as many obstacles as possible using a limited number of birds. Since the game environment can change tremendously after each shot, a deterministic planning model is very likely to fail. In this paper, we integrate deliberate planning and acting for Angry Birds with refinement methods. Specifically, we design a refinement acting engine (RAE) based on ARP-interleave with the Sequential Refinement Planning Engine (SeRPE). In addition, we implement a greedy algorithm, Depth First Forward Search (DFFS), and the $A^*$ algorithm to perform the actor's deliberation functions. We evaluate our agent on the web version of Angry Birds in Chrome using the client-server platform provided by the IJCAI 2015 AI Birds Competition. In our experiments, we find that our agent using SeRPE with the $A^*$ algorithm greatly outperforms the agent using the greedy algorithm or forward search without SeRPE, demonstrating the practical significance of refinement methods for planning. Please see the supplementary video \url{https://youtu.be/u7XJ0g6d9po} for more results.
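The $A^*$ deliberation function named above is the classic best-first search with an admissible heuristic. The Angry Birds state space is far richer than a grid, so the sketch below only illustrates the priority-queue structure on a toy grid with a Manhattan-distance heuristic; all names and the domain are ours.

```python
import heapq

def astar(start, goal, walls, size):
    """Shortest path length on a size x size grid with blocked cells, via A*."""
    def h(p):  # Admissible Manhattan-distance heuristic.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]  # (f = g + h, g, node)
    best = {start: 0}
    while frontier:
        _, g, cur = heapq.heappop(frontier)
        if cur == goal:
            return g
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in walls and g + 1 < best.get(nxt, float("inf"))):
                best[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None  # Goal unreachable.

# Detour around a two-cell wall on a 4x4 grid takes 7 moves:
print(astar((0, 0), (3, 0), {(1, 0), (1, 1)}, 4))  # 7
```

In the agent, the "nodes" would be abstracted game states and the heuristic an estimate of remaining pigs, with SeRPE refining each chosen action into concrete shots.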
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt... - Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io’s surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io’s trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high-resolution imaging of Io’s surface using adaptive optics at visible wavelengths.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ... - Sérgio Sacani
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest
imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters
spanning 0.4−0.9µm) and novel JWST images with 14 filters spanning 0.8−5µm, including 7 mediumband filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data
at > 2.3µm to construct an ultradeep image, reaching as deep as ≈ 31.4 AB mag in the stack and
30.3-31.0 AB mag (5σ, r = 0.1” circular aperture) in individual filters. We measure photometric
redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts
z = 11.5 − 15. These objects show compact half-light radii of R1/2 ∼ 50 − 200pc, stellar masses of
M⋆ ∼ 107−108M⊙, and star-formation rates of SFR ∼ 0.1−1 M⊙ yr−1
. Our search finds no candidates
at 15 < z < 20, placing upper limits at these redshifts. We develop a forward modeling approach to
infer the properties of the evolving luminosity function without binning in redshift or luminosity that
marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the
impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results,
and that the luminosity function normalization and UV luminosity density decline by a factor of ∼ 2.5
from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical
models for evolution of the dark matter halo mass function.
Cancer cell metabolism: special Reference to Lactate PathwayAADYARAJPANDEY1
Normal Cell Metabolism:
Cellular respiration describes the series of steps that cells use to break down sugar and other chemicals to get the energy we need to function.
Energy is stored in the bonds of glucose and when glucose is broken down, much of that energy is released.
Cell utilize energy in the form of ATP.
The first step of respiration is called glycolysis. In a series of steps, glycolysis breaks glucose into two smaller molecules - a chemical called pyruvate. A small amount of ATP is formed during this process.
Most healthy cells continue the breakdown in a second process, called the Kreb's cycle. The Kreb's cycle allows cells to “burn” the pyruvates made in glycolysis to get more ATP.
The last step in the breakdown of glucose is called oxidative phosphorylation (Ox-Phos).
It takes place in specialized cell structures called mitochondria. This process produces a large amount of ATP. Importantly, cells need oxygen to complete oxidative phosphorylation.
If a cell completes only glycolysis, only 2 molecules of ATP are made per glucose. However, if the cell completes the entire respiration process (glycolysis - Kreb's - oxidative phosphorylation), about 36 molecules of ATP are created, giving it much more energy to use.
IN CANCER CELL:
Unlike healthy cells that "burn" the entire molecule of sugar to capture a large amount of energy as ATP, cancer cells are wasteful.
Cancer cells only partially break down sugar molecules. They overuse the first step of respiration, glycolysis. They frequently do not complete the second step, oxidative phosphorylation.
This results in only 2 molecules of ATP per each glucose molecule instead of the 36 or so ATPs healthy cells gain. As a result, cancer cells need to use a lot more sugar molecules to get enough energy to survive.
Unlike healthy cells that "burn" the entire molecule of sugar to capture a large amount of energy as ATP, cancer cells are wasteful.
Cancer cells only partially break down sugar molecules. They overuse the first step of respiration, glycolysis. They frequently do not complete the second step, oxidative phosphorylation.
This results in only 2 molecules of ATP per each glucose molecule instead of the 36 or so ATPs healthy cells gain. As a result, cancer cells need to use a lot more sugar molecules to get enough energy to survive.
introduction to WARBERG PHENOMENA:
WARBURG EFFECT Usually, cancer cells are highly glycolytic (glucose addiction) and take up more glucose than do normal cells from outside.
Otto Heinrich Warburg (; 8 October 1883 – 1 August 1970) In 1931 was awarded the Nobel Prize in Physiology for his "discovery of the nature and mode of action of the respiratory enzyme.
WARNBURG EFFECT : cancer cells under aerobic (well-oxygenated) conditions to metabolize glucose to lactate (aerobic glycolysis) is known as the Warburg effect. Warburg made the observation that tumor slices consume glucose and secrete lactate at a higher rate than normal tissues.
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
Seminar of U.V. Spectroscopy by SAMIR PANDASAMIR PANDA
Spectroscopy is a branch of science dealing the study of interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption spectroscopy or reflect spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that can measure the amount of light received by the analyte.
This pdf is about the Schizophrenia.
For more details visit on YouTube; @SELF-EXPLANATORY;
https://www.youtube.com/channel/UCAiarMZDNhe1A3Rnpr_WkzA/videos
Thanks...!
Introduction:
RNA interference (RNAi) or Post-Transcriptional Gene Silencing (PTGS) is an important biological process for modulating eukaryotic gene expression.
It is highly conserved process of posttranscriptional gene silencing by which double stranded RNA (dsRNA) causes sequence-specific degradation of mRNA sequences.
dsRNA-induced gene silencing (RNAi) is reported in a wide range of eukaryotes ranging from worms, insects, mammals and plants.
This process mediates resistance to both endogenous parasitic and exogenous pathogenic nucleic acids, and regulates the expression of protein-coding genes.
What are small ncRNAs?
micro RNA (miRNA)
short interfering RNA (siRNA)
Properties of small non-coding RNA:
Involved in silencing mRNA transcripts.
Called “small” because they are usually only about 21-24 nucleotides long.
Synthesized by first cutting up longer precursor sequences (like the 61nt one that Lee discovered).
Silence an mRNA by base pairing with some sequence on the mRNA.
Discovery of siRNA?
The first small RNA:
In 1993 Rosalind Lee (Victor Ambros lab) was studying a non- coding gene in C. elegans, lin-4, that was involved in silencing of another gene, lin-14, at the appropriate time in the
development of the worm C. elegans.
Two small transcripts of lin-4 (22nt and 61nt) were found to be complementary to a sequence in the 3' UTR of lin-14.
Because lin-4 encoded no protein, she deduced that it must be these transcripts that are causing the silencing by RNA-RNA interactions.
Types of RNAi ( non coding RNA)
MiRNA
Length (23-25 nt)
Trans acting
Binds with target MRNA in mismatch
Translation inhibition
Si RNA
Length 21 nt.
Cis acting
Bind with target Mrna in perfect complementary sequence
Piwi-RNA
Length ; 25 to 36 nt.
Expressed in Germ Cells
Regulates trnasposomes activity
MECHANISM OF RNAI:
First the double-stranded RNA teams up with a protein complex named Dicer, which cuts the long RNA into short pieces.
Then another protein complex called RISC (RNA-induced silencing complex) discards one of the two RNA strands.
The RISC-docked, single-stranded RNA then pairs with the homologous mRNA and destroys it.
THE RISC COMPLEX:
RISC is large(>500kD) RNA multi- protein Binding complex which triggers MRNA degradation in response to MRNA
Unwinding of double stranded Si RNA by ATP independent Helicase
Active component of RISC is Ago proteins( ENDONUCLEASE) which cleave target MRNA.
DICER: endonuclease (RNase Family III)
Argonaute: Central Component of the RNA-Induced Silencing Complex (RISC)
One strand of the dsRNA produced by Dicer is retained in the RISC complex in association with Argonaute
ARGONAUTE PROTEIN :
1.PAZ(PIWI/Argonaute/ Zwille)- Recognition of target MRNA
2.PIWI (p-element induced wimpy Testis)- breaks Phosphodiester bond of mRNA.)RNAse H activity.
MiRNA:
The Double-stranded RNAs are naturally produced in eukaryotic cells during development, and they have a key role in regulating gene expression .
Professional air quality monitoring systems provide immediate, on-site data for analysis, compliance, and decision-making.
Monitor common gases, weather parameters, particulates.
Nutraceutical market, scope and growth: Herbal drug technologyLokesh Patil
As consumer awareness of health and wellness rises, the nutraceutical market—which includes goods like functional meals, drinks, and dietary supplements that provide health advantages beyond basic nutrition—is growing significantly. As healthcare expenses rise, the population ages, and people want natural and preventative health solutions more and more, this industry is increasing quickly. Further driving market expansion are product formulation innovations and the use of cutting-edge technology for customized nutrition. With its worldwide reach, the nutraceutical industry is expected to keep growing and provide significant chances for research and investment in a number of categories, including vitamins, minerals, probiotics, and herbal supplements.
4. An Overview of Sugarcane White Leaf Disease in Vietnam.pdf
Project Geollery.com: Reconstructing a Live Mirrored World With Geotagged Social Media
1. Project Geollery.com: Reconstructing a Live Mirrored World With Geotagged Social Media
Ruofei Du†, David Li†, and Amitabh Varshney
{ruofei, dli7319, varshney}@umiacs.umd.edu | www.Geollery.com | Web3D 2019, Los Angeles, USA
UMIACS | The Augmentarium: Virtual and Augmented Reality Lab at the University of Maryland
Computer Science, University of Maryland, College Park
33. [I will use it for] exploring new places. If I am going on vacation somewhere, I could immerse myself into the location. If there are avatars around that area, I could ask questions.
P1 / M
34. I think it (Geollery) will be useful for families. I just taught my grandpa how to use Facetime last week and it would [be] great if I could teleport to their house and meet with them; then we could chat and share photos with our avatars.
P2 / F
35. What if we could reconstruct a high-quality, all-textured, walkable mirrored world with geotagged social media in real time?
77. LRU Cache (Least Recently Used)
Five adjacent street views are cached while users are walking. Each geometry has 131,074 vertices to be processed by the GPU.
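The caching strategy on this slide can be sketched as a small LRU cache keyed by panorama ID. This is a minimal sketch: the capacity of 5 matches the slide, but the class name, API, and cached payload shape are illustrative assumptions, not Geollery's actual implementation.

```typescript
// Minimal LRU cache sketch (illustrative; only the capacity of 5
// comes from the slide). A Map preserves insertion order, so the
// first key is always the least recently used entry.
class LruCache<K, V> {
  private entries = new Map<K, V>();
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    const value = this.entries.get(key);
    if (value !== undefined) {
      // Re-insert to mark this entry as most recently used.
      this.entries.delete(key);
      this.entries.set(key, value);
    }
    return value;
  }

  set(key: K, value: V): void {
    if (this.entries.has(key)) {
      this.entries.delete(key);
    } else if (this.entries.size >= this.capacity) {
      // Evict the least recently used entry (first in insertion order).
      const oldest = this.entries.keys().next().value;
      if (oldest !== undefined) this.entries.delete(oldest);
    }
    this.entries.set(key, value);
  }

  has(key: K): boolean {
    return this.entries.has(key);
  }
}

// Hypothetical usage: cache the 5 street views adjacent to the user.
const panoCache = new LruCache<string, { depthMap: Float32Array }>(5);
```

Evicting only the least recently visited panorama keeps the working set small while the user walks, since adjacent street views are likely to be revisited.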
81. Geollery/Social Street View has its own set of distinct offerings, as it is anchored within real-world settings, just mapped onto VR, whereas these are definitely more 'fantasy' type of arenas. In that way, as you have already done, I think there are a multitude of game challenges/tasks/feedback, like the balloons, to add in!
Email feedback from pilot users
82. I think it'd be cool if you could see posts by people in real time, along with the establishment they're in (like someone tweeting from inside McDonald's or a movie theater), if that makes sense. Sort of like checking in to a place on Facebook.
Email feedback from pilot users
84. Contributing a large-scale real-time system to reconstruct a mirrored world without prior knowledge of any 3D models, using only street view images and depth maps (which may themselves be estimated by a deep learning pipeline).
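The contribution above (reconstruction from street view images and depth maps alone) can be illustrated with a short sketch that lifts an equirectangular depth map into 3D vertex positions. The function name, row-major depth layout, and spherical parameterization here are assumptions for illustration, not the paper's exact pipeline.

```typescript
// Sketch: lift an equirectangular depth map into 3D vertex positions.
// Assumes depth[row * width + col] stores the metric distance from the
// panorama center; names and layout are illustrative assumptions.
function depthMapToVertices(
  depth: Float32Array,
  width: number,
  height: number
): Float32Array {
  const positions = new Float32Array(width * height * 3);
  for (let row = 0; row < height; row++) {
    // Polar angle theta spans (0, pi) from the top of the panorama down.
    const theta = (Math.PI * (row + 0.5)) / height;
    for (let col = 0; col < width; col++) {
      // Azimuth phi spans (0, 2*pi) across the panorama.
      const phi = (2 * Math.PI * (col + 0.5)) / width;
      const d = depth[row * width + col];
      const i = 3 * (row * width + col);
      positions[i] = d * Math.sin(theta) * Math.cos(phi);     // x
      positions[i + 1] = d * Math.cos(theta);                 // y (up)
      positions[i + 2] = d * Math.sin(theta) * Math.sin(phi); // z
    }
  }
  return positions;
}
```

For scale, a 512 × 256 sampling grid plus two pole vertices would give 512 × 256 + 2 = 131,074 vertices, which likely explains the per-geometry vertex count quoted on the LRU cache slide.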
85. Establishing a web-based platform at Geollery.com for visualizing geotagged social media in a collaborative mixed-reality setting.
99. Thank you!
Ruofei Du, David Li, and Amitabh Varshney
{ruofei, dli7319, varshney}@cs.umd.edu | www.Geollery.com | CHI 2019
100. Project Geollery.com: Reconstructing a Live Mirrored World With Geotagged Social Media
Ruofei Du, David Li, and Amitabh Varshney
{ruofei, dli7319, varshney}@umiacs.umd.edu | www.Geollery.com | CHI 2019 | Demo at D-2 (INT-40)
105. Post Interview: Question 1/3
Suppose that we have a polished 3D social media platform like Geollery or Social Street View, would you like to use it? If so, how much time would you like to spend on it?
107. I would like to use it every day when I go to work, or travel during weekends.
P6 / F
108. If it's not distracting like Facebook and Instagram, I would use it every day on a couple of things.
P17 / F
109. I am a follower on most social media sites. I would only join a 3D social media platform once my friends are there.
P4 / M
110. If my friends are all on this, I can see myself spending a couple of hours every week.
P12 / M
111. I don't think I will use this. I prefer to use Yelp to see comments [of nearby restaurants].
P12 / M
112. Post Interview: Question 2/3
Can you imagine your use cases for Geollery and Social Street View? What would you like to use 3D social media platforms for?
113. I would like to use it for the food in different restaurants. I am always hesitating among different restaurants. It will be very easy to see all restaurants with street views. In Yelp, I can only see one restaurant at a time.
P6 / F
114. [I will use it for] exploring new places. If I am going on vacation somewhere, I could immerse myself into the location. If there are avatars around that area, I could ask questions.
P1 / M
115. I think it (Geollery) will be useful for families. I just taught my grandpa how to use Facetime last week and it would [be] great if I could teleport to their house and meet with them; then we could chat and share photos with our avatars.
P2 / F
116. … for communicating with my families, maybe, and distant friends, [so] they can see New York. And, getting to know more people, connecting with people based on similar interests.
P2 / F
117. Post Interview: Question 3/3
If you were a designer or product manager for Geollery or Social Street View, what features would you like to add to the systems?
118. A mapping of the texture, high-resolution texture, will be great.
P12 / M
119. If there is a way to unify the interaction between them, there will be more realistic buildings [and] you could have more roof structures. Terrains will be interesting to add on.
P18 / M
120. I would like to see kitties and puppies running around, and birds flying in the air.
P13 / F
121. I could also add a bike, add a vehicle, a motorcycle in Geollery; this will add some fun.
P17 / F