Lattice Energy LLC- Electroweak Neutron Production and Capture During Lightni...Lewis Larsen
Abstract. Electroweak production of low-energy neutrons in terrestrial lightning discharges was predicted by the Widom-Larsen theory of low energy nuclear reactions (LENRs) and recently confirmed by new Russian data. Contrary to longstanding belief, this data implies that neutron-capture-driven, non-stellar nucleosynthetic processes have likely been occurring in the environs of earth since it condensed as a planetary body ~4.5 billion years ago. Moreover, some researchers have recently suggested that lightning was significantly involved in dust processing during the era of the presolar nebula; if true, this would push non-stellar nucleosynthesis within the solar system even further back in time. Altogether, present thinking about types of nuclear processes affecting the chemical evolution of the earth and solar system may require revision.
Stellar fusion: A star is a hot ball of mostly hydrogen gas; the Sun is an example of a typical, ordinary star.
In the core of a star, the temperature and density are high enough to sustain nuclear fusion reactions, and the energy produced by these reactions works its way to the surface and radiates into space as heat and light.
The process of building up heavier elements from lighter ones by nuclear reactions is called stellar nucleosynthesis. The stars' fuel for energy generation is the material they are made of -- hydrogen, helium, carbon, etc. -- which they burn by converting these elements into heavier ones.
"Burning" in this context does not refer to the kind of burning we are familiar with, such as the burning of wood or coal, which is chemical burning.
It refers to nuclear burning, in which the nuclei of atoms fuse into nuclei of heavier atoms.
When stars start their lives, they consist mostly of hydrogen, some helium, and small amounts of heavier elements, such as carbon, nitrogen, and oxygen.
In 1938, Hans Bethe and Carl Friedrich von Weizsäcker analyzed two processes believed to be the source of energy in stars: the p-p chain and the CNO cycle. They showed that hydrogen can be converted into helium through nuclear reactions, which take place at high temperatures and densities.
The energy of the Sun comes from nuclear fusion reactions, with the p-p cycle dominant.
The chain starts with the fusion of two hydrogen nuclei to produce deuterium, via the weak interaction:
¹H + ¹H → ²H + e⁺ + ν
The deuterium then fuses with more hydrogen to produce ³He via the electromagnetic interaction:
²H + ¹H → ³He + γ
And finally, two ³He nuclei fuse to form ⁴He via the strong nuclear interaction:
³He + ³He → ⁴He + 2 ¹H
A very large amount of energy is released in this last reaction because ⁴He is a doubly magic nucleus and so is very tightly bound.
Combining these equations, we have:
4 ¹H → ⁴He + 2e⁺ + 2ν + 24.68 MeV
Because the temperature of the Sun is about 10⁷ K, matter in this state is fully ionized and is referred to as a plasma. Each positron produced above annihilates with an electron in the plasma to release a further 1.02 MeV, so the total energy released is 26.72 MeV.
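The quoted energy release can be checked directly from the mass deficit, using standard atomic masses (atomic masses include the electrons, so the result already accounts for positron annihilation):

```python
# Sanity check of the pp-chain energy release from the mass deficit,
# Q = (4*m(1H) - m(4He)) * 931.494 MeV/u, using standard atomic masses.
M_H1 = 1.007825   # atomic mass of 1H, in u
M_HE4 = 4.002603  # atomic mass of 4He, in u
U_TO_MEV = 931.494

q = (4 * M_H1 - M_HE4) * U_TO_MEV
print(f"{q:.2f} MeV")  # ≈ 26.73 MeV, matching the total quoted above
```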
Another interesting cycle is the carbon, or CNO, chain.
The CNO cycle (for carbon–nitrogen–oxygen) is one of the two known sets of fusion reactions by which stars convert hydrogen to helium, using carbon, nitrogen, and oxygen nuclei as catalysts; the other is the proton–proton chain reaction (pp-chain reaction).
Fusion processes continue to produce heavier elements until the core of the stellar object is composed mainly of nuclei with A ≈ 56, i.e. the peak of the binding-energy-per-nucleon curve.
Heavier nuclei are produced in supernova explosions, but this is properly the subject of astrophysics and
we will not pursue it further.
Slide presentation for MS chemistry unit describing formation of the elements. Presentation uses photos from Hubble Space Telescope. Ends with open writing exercise.
Lattice Energy LLC - HESS Collaboration reports evidence for PeV cosmic rays ...Lewis Larsen
The HESS Collaboration has published an important paper in Nature: it detected gamma rays coming from the Milky Way's central black hole, indicating that PeV cosmic rays come from the same source. The Widom-Larsen-Srivastava theory provides a many-body collective mechanism that can accelerate protons to PeV and higher energies in the immediate vicinity of such black holes. Cosmic-ray particle energies depend upon the field strength in magnetic structures, the size of the structure, and the duration of charged-particle acceleration.
Some think there are only 3 states of matter. Well, I say they are wrong. This slide show gives all the required info about the 4th and 5th states of matter, i.e. plasma and the Bose-Einstein condensate.
At the quantum level, there are profound differences between fermions (which follow Fermi-Dirac statistics) and bosons (which follow Bose-Einstein statistics).
As a gas of bosonic atoms is cooled very close to absolute zero, its characteristics change dramatically.
More precisely, when its temperature falls below a critical temperature Tc, a large fraction of the atoms condenses into the lowest quantum state.
This dramatic phenomenon is known as Bose-Einstein condensation.
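For an ideal Bose gas, Tc has a closed form, Tc = (2πħ²/m·kB)·(n/ζ(3/2))^(2/3). The sketch below evaluates it for illustrative numbers (⁸⁷Rb atoms at a density of 10²⁰ m⁻³; both values are assumptions, not taken from the slides):

```python
# Critical temperature of an ideal Bose gas:
#   T_c = (2*pi*hbar^2 / (m*k_B)) * (n / zeta(3/2))^(2/3)
# Evaluated for assumed parameters: 87Rb atoms, density ~1e20 m^-3.
import math

HBAR = 1.0546e-34   # reduced Planck constant, J s
KB = 1.381e-23      # Boltzmann constant, J/K
ZETA_3_2 = 2.612    # Riemann zeta(3/2)

def bec_tc(n_density, mass_kg):
    """Ideal-gas BEC critical temperature in kelvin."""
    return (2 * math.pi * HBAR**2 / (mass_kg * KB)) * (n_density / ZETA_3_2) ** (2 / 3)

m_rb87 = 87 * 1.6605e-27  # mass of one 87Rb atom, kg
print(f"{bec_tc(1e20, m_rb87) * 1e9:.0f} nK")  # a few hundred nanokelvin
```

The few-hundred-nanokelvin result is why BEC experiments need laser and evaporative cooling.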
Lattice Energy LLC - Many body collective magnetic mechanism creates ultrahig...Lewis Larsen
“The main reason why the origin of cosmic rays (CRs) is still unknown, one century after their discovery, is that they are charged nuclei isotropized by the turbulent magnetic field in the Galaxy to such a high degree that their observed flux is essentially identical in all directions, with no sources or decisive hot spots identified in any region the sky …” - Etienne Parizot (Univ. of Paris-Diderot), Nuclear Physics B (2014)
In 2008 (arXiv) and 2010 (Pramana), we derived and published approximate, rule-of-thumb formulas for calculating estimated one-shot, mean center-of-mass acceleration energies for charged particles present in plasma-filled magnetic flux tubes (also called “coronal loops”) for two cases: (1) steady-state and (2) explosive destruction of an unstable flux tube (this second case is a subset of “magnetic reconnection” processes).
Our simple equations for magnetic flux tubes are robust and scale-independent. They consequently have broad applicability, from exploding wires (which in the early stages of explosion comprise dense dusty plasmas) and lightning to solar flux tubes and other astrophysical environments characterized by vastly higher magnetic fields; these include many types of stars besides the Sun, neutron stars, magnetars, and regions located near black holes and active galactic nuclei.
Herein we show how plasma-filled magnetic flux tubes likely occur in many different astrophysical systems, from relatively small objects (neutron stars and magnetars) to relatively large objects (accretion disks and jet bases of supermassive black holes). When these ordered magnetic structures explode (reconnection, flares), enormous amounts of magnetic energy are converted into kinetic energy of charged particles present inside the exploding flux tubes. Using reasonable parametric assumptions, we calculate one-shot, center-of-mass acceleration energies for protons in collapsing protoneutron stars (5.5 × 10¹⁸ eV), two cases for BH accretion disks (2.2 × 10¹⁷ eV and 0.9 × 10¹⁹ eV), and finally for the jet base of a supermassive black hole (2.2 × 10²¹ eV).
What all these numbers suggest, including those for the Sun, is that the W-L-S particle acceleration mechanism for magnetic flux tubes can create cosmic-ray particles at energies that span the entire cosmic-ray energy spectrum from top to bottom. This argues that commonplace flux tubes may well play a significant role in generating the observed cosmic-ray energy spectrum, and it would be consistent with the apparent overall anisotropy of sources at all but the very highest particle energies. That said, we think a number of different acceleration mechanisms likely contribute to the entire spectrum, including shock acceleration and perhaps exotic mechanisms such as evaporation of gaseous winds from neutron stars (Widom et al., arXiv:1410.6498v2, 2015).
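An order-of-magnitude cross-check on such energies can be done with the standard Hillas criterion, E_max ~ Z·e·B·L·c, the maximum energy a charge Ze can gain while confined in a region of size L with magnetic field B. This is not the W-L-S flux-tube formula itself, and the parameters below are purely illustrative:

```python
# Order-of-magnitude estimate via the standard Hillas criterion,
# E_max ~ Z * e * B * L * c. This is NOT the W-L-S rule-of-thumb
# formula from the abstract; parameters here are illustrative only.
C = 2.998e8  # speed of light, m/s

def hillas_emax_eV(B_tesla, L_m, Z=1):
    """Maximum attainable energy in eV: E = Z * B * L * c (the charge e cancels)."""
    return Z * B_tesla * L_m * C

# e.g. a magnetar-strength field (~1e10 T) acting over a 10 km region
print(f"{hillas_emax_eV(1e10, 1e4):.1e} eV")  # ~3e22 eV
```

Even crude confinement arguments like this reach the ultrahigh energies quoted above when fields and length scales are extreme.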
ZAO "METTEM - Technologies"
Barrier
ZAO "METTEM - Production"
OOO "METTEM - M"
ZAO "METTEM - Construction Technologies"
ZAO "METTEM - Lighting Engineering"
OOO "METTEM - Spetsavtomatika"
OOO "Tiflotek"
etc.
Closer to the Edge? Prospects for household debt repayments as interest rates...ResolutionFoundation
The number of families in Britain with perilous levels of debt repayments could more than double to 1.2 million if interest rates rise faster than expected in the next four years and household income growth is weak and uneven. In this extended version (no commentary), the Resolution Foundation's senior economist, Matthew Whittaker, sets out the first full analysis of how rising interest rates could affect families under different scenarios for the recovery in household incomes. A shorter version of these slides (with commentary) is available at http://www.slideshare.net/ResolutionFoundation/closer-to-the-edge
Acoustics: errors come at a price - Constitutional Court ruling no. 103/2013 Seppelfricke SD
The rules on the passive acoustic requirements of buildings and of their components do not apply to relationships between private parties and, in particular, to relationships between builders, sellers, and buyers of housing units, without prejudice to the effects of final court judgments and to the correct, workmanlike execution of the works as certified by a qualified technician.
The most affordable models of METTEM® locks;
Certified to GOST classes 3 and 2;
Characterized by high reliability and burglary resistance at a relatively low product cost;
Used for locking both metal and wooden doors, as well as metal cabinets;
Can be manufactured with or without additional security elements.
Vulnerabilities and Exploitation - Application Security Sci-Fi Hipster Editionblaufish
Explaining vulnerabilities, exploits, attack vectors, attack surface reduction, ASLR, etc. to someone who understands the Imperial Death Star.
Presented at Opkoko 2013.1. Live presentation recording in Swedish here: http://www.youtube.com/watch?v=Xi9SRFENiO4
Astronomy- State of the art is a course covering the hottest topics in astronomy. In this section, the exotic end states of stars are discussed, including pulsars, neutron stars, and black holes.
Theory of Planetary System Formation The mass of the presol.pdfadislifestyle
Theory of Planetary System Formation
The mass of the pre-solar molecular cloud played the largest role in how the solar system formed. It might have begun with a 100-solar-mass cloud approximately 1 to 2 light years in diameter. It is possible that mutual gravitational attraction between cloud particles was too weak to start the process. When gravity is too weak, the only other force strong enough to bring a significant number of particles together is the electromagnetic force. Barring that, perhaps a nearby shockwave from a supernova explosion caused the initial motion of material; but once started, gravity took hold, causing the inevitable collapse. The process it underwent followed a pattern that scientists believe is mirrored everywhere a star exists.
In this activity, you will put the parts of the solar system formation process in order from the beginning.
1. Planetesimals accreted material until they became large enough to form planets.
2. Gravitational potential energy of the collapsing gas cloud was converted into thermal energy.
3. The collapsing gas cloud rotated faster as the collapse continued, due to conservation of angular momentum.
4. Planetesimals were massive enough to have a gravitational field sufficient to attract additional nearby objects.
5. The random motions of material in the collapsing gas cloud were reduced to the final motion of the material rotating in a disk.
6. The inner parts of the continuing, flattening cloud free-fell into the growing object at the center.
7. Continued motions brushed smaller particle grains against larger grains. As this electrostatic "sticking" occurred, the particle grains became larger.
8. A cloud of molecular gas started to collapse due to gravity or another astrophysical process.
Use the number of each process to order them from earliest to latest (left to right).
Temperature and the Formation of Our Solar System
Temperature was the key factor leading to the state distribution of various objects made of different elements and compounds. The graph below shows the temperature (expressed in kelvins) at different distances from the Sun (expressed in AU) in the solar system during the time when the planets were formed. To produce a linear plot, in the usual sense, the vertical axis is inverted so that temperature goes from high to low starting near the Sun. Use Figure 1 to fill in Table 1 with the formation temperatures for each planet, including the dwarf planet Ceres. (Astronomy 1511 Laboratory Manual)
Bond albedo refers to the total radiation reflected from an object compared to the total incident radiation from the Sun. Geometric albedo refers to the amount of radiation equally reflected in all directions at all wavelengths off an object. It is clear that a wide range of planetary formation temperatures existed in the early solar system. The temperatures at which different compounds form, or at which elements change physical state, will vary. Since the majority of material in the molecular cl.
PHY 1301, Physics I Course Learning Outcomes forajoy21
Course Learning Outcomes for Unit VII
Upon completion of this unit, students should be able to:
7. Describe fundamental thermodynamic concepts.
7.1 Explain the various heat transfer mechanisms with practical examples.
7.2 Recognize the ideal gas law and apply it to daily life.
7.3 Describe the relationship between kinetic energy and the Kelvin temperature.
Course/Unit Learning Outcomes | Learning Activity
7.1 | Unit Lesson; Chapter 13; Chapter 14; Unit VII PowerPoint Presentation
7.2 | Unit Lesson; Chapter 13; Chapter 14; Unit VII PowerPoint Presentation
7.3 | Unit Lesson; Chapter 13; Chapter 14; Unit VII PowerPoint Presentation
Required Unit Resources
Chapter 13: The Transfer of Heat, pp. 360–379
Chapter 14: The Ideal Gas Law and Kinetic Theory, pp. 380–400
Unit Lesson
UNIT VII STUDY GUIDE
Heat Mechanism and Kinetic Theory
The Three Methods to Transfer Heat
The above image illustrates the three heat transfer methods. The sun heats the Earth by radiation, the
surface of the Earth heats the air by conduction, and the warm air rises by convection.
What is heat? Heat is energy that moves from a high-temperature object to a low-temperature object. Its unit is the joule (J), but it is sometimes measured in kilocalories (kcal); the conversion factor between the two units is 1 kcal = 4186 J. Heat is transferred by the following mechanisms.
Conduction is the process in which heat is transferred through a material. The atoms or molecules in a hotter
part of the material have greater energy than those in a colder part of the material, and thus the energy is
transferred from the hotter place to the colder place. Notice that the bulk motion of the material has nothing to
do with this process. You can easily find examples of conduction. A radiator in your house is one of them. If
you put an object on the radiator, the object will become warmer. Another example is when you pour the
brewed hot coffee into a cold cup; the heat from the hot coffee makes the cup itself hot.
The heat Q conducted during a time t through a bar of length L and cross-sectional area A is expressed as Q = k A ΔT t / L. Here, k is the thermal conductivity, which depends on the substance, and ΔT is the temperature difference between the hotter end and the colder end of the bar.
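The conduction formula can be put to work with a small worked example; the copper-bar numbers below are made up for illustration:

```python
# Worked example of the conduction formula Q = k * A * dT * t / L
# from the unit lesson. The copper-bar parameters are illustrative.
def conducted_heat(k, area, dT, t, length):
    """Heat in joules conducted through a bar: Q = k*A*dT*t/L."""
    return k * area * dT * t / length

# copper: k ≈ 390 W/(m·K); 1 cm^2 cross-section, 0.5 m long,
# 80 K temperature difference, held for 60 s
q = conducted_heat(390, 1e-4, 80, 60, 0.5)
print(f"{q:.0f} J")  # ≈ 374 J
```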
Convection is the process in which heat is transferred by the bulk motion of a fluid. According to the ideal gas law at constant pressure, the volume (V) is proportional to the temperature (T): V increases as T increases, so the density decreases for a fixed mass. Warm air rises and cooler air sinks; this circulation transports the energy. The energy generated in the center of the sun is transported by convection near the photosphere. Cool gas sinks while bubbles of hot gas rise. There is a patchwork patte ...
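The V ∝ T claim at constant pressure follows directly from the ideal gas law, PV = nRT; the numbers below are illustrative:

```python
# Ideal gas law (PV = nRT) check that, at constant pressure,
# volume scales with temperature. Values are illustrative.
R = 8.314  # gas constant, J/(mol·K)

def volume(n_mol, T_kelvin, P_pascal):
    """Volume (m^3) of an ideal gas from PV = nRT."""
    return n_mol * R * T_kelvin / P_pascal

v_cold = volume(1.0, 280.0, 101325.0)  # 1 mol at 280 K, 1 atm
v_warm = volume(1.0, 300.0, 101325.0)  # same gas warmed to 300 K
print(v_warm / v_cold)  # = 300/280 ≈ 1.07: warmer air is less dense
```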
Magma oceans and enhanced volcanism on TRAPPIST-1 planets due to induction he...Sérgio Sacani
Low-mass M stars are plentiful in the Universe and often host small, rocky planets detectable with current instrumentation.
These stars host magnetic fields, some of which have been observed to exceed a few hundred gauss. Recently, seven small
planets have been discovered orbiting the ultra-cool M dwarf TRAPPIST-1, which has an observed magnetic field of 600 G. We
suggest electromagnetic induction heating as an energy source inside these planets. If the stellar rotation and magnetic dipole
axes are inclined with respect to each other, induction heating can melt the upper mantle and enormously increase volcanic
activity, sometimes producing a magma ocean below the planetary surface. We show that induction heating leads the four
innermost TRAPPIST-1 planets, one of which is in the habitable zone, either to evolve towards a molten mantle planet, or to
experience increased outgassing and volcanic activity, while the three outermost planets remain mostly unaffected.
Lattice Energy LLC - Neutron production and nucleosynthesis in electric disch...Lewis Larsen
LENR transmutations can occur all around us. Neutrons can be created when Hydrogen atoms (protons) are present within many different types of electric discharges that can include among diverse other things: atmospheric lightning on earth and other planets, arcs between electrodes in air, water, hydrocarbons, as well as in nano-arcs (internal shorts) that can occur in electrochemical batteries.
The extremely high albedo of LTT 9779 b revealed by CHEOPSSérgio Sacani
Optical secondary eclipse measurements of small planets can provide a wealth of information about the reflective properties
of these worlds, but the measurements are particularly challenging to attain because of their relatively shallow depth. If such signals
can be detected and modeled, however, they can provide planetary albedos, thermal characteristics, and information on absorbers in
the upper atmosphere.
Aims. We aim to detect and characterize the optical secondary eclipse of the planet LTT 9779 b using the CHaracterising ExOPlanet
Satellite (CHEOPS) to measure the planetary albedo and search for the signature of atmospheric condensates.
Methods. We observed ten secondary eclipses of the planet with CHEOPS. We carefully analyzed and detrended the light curves using
three independent methods to perform the final astrophysical detrending and eclipse model fitting of the individual and combined light
curves.
Results. Each of our analysis methods yielded statistically similar results, providing a robust detection of the eclipse of LTT 9779 b
with a depth of 115±24 ppm. This surprisingly large depth provides a geometric albedo for the planet of 0.80 (+0.10/−0.17), consistent with estimates of radiative-convective models. This value is similar to that of Venus in our own Solar System. When combining the eclipse
from CHEOPS with the measurements from TESS and Spitzer, our global climate models indicate that LTT 9779 b likely has a super
metal-rich atmosphere, with a lower limit of 400× solar being found, and the presence of silicate clouds. The observations also reveal
hints of optical eclipse depth variability, but these have yet to be confirmed.
Conclusions. The results found here in the optical when combined with those in the near-infrared provide the first steps toward
understanding the atmospheric structure and physical processes of ultrahot Neptune worlds that inhabit the Neptune desert.
http://www.ces.fau.edu/nasa/module-2/how-greenhouse-effect-works.php
This figure shows the blackbody spectra of the Earth and the Sun. The incoming radiation from the Sun is much more intense (Y-axis) than the outgoing radiation from the Earth, because the energy emitted by a blackbody is proportional to the fourth power of its temperature (σT⁴); i.e., the Sun emits a far greater amount of energy than the Earth. Incoming solar radiation is shortwave (X-axis, wavelength in microns), in the wavelength range of ultraviolet and visible radiation (shown as the rainbow spectrum of colors). Outgoing Earth radiation is longwave, in the range of infrared radiation (shown in red). Below the blackbody spectra, molecules in the atmosphere, known as greenhouse gases, interfere with incoming and outgoing radiation. For instance, ozone (O3) in the stratosphere absorbs some of the incoming radiation and is known as the ozone layer. That said, the greenhouse gases (N2O, O3, CO2, and H2O) mainly interfere with outgoing radiation. Let's talk about the molecular motion of these greenhouse gases to understand the greenhouse effect.
Molecular Motions and the Greenhouse Gases H2O and CO2
[Figure: CO2 infrared absorption bands at 2349 cm⁻¹ and 667 cm⁻¹]
Here are the physical causes (molecular motions) of the greenhouse effect. But first, a warning: it may be a bit chunky, so sit back and take a deep breath!
Gas molecules can absorb or emit radiation in the infrared range in two different ways. One way is by changing the rate at which the molecules rotate. The theory of quantum mechanics describes the behavior of matter on a microscopic scale, that is, at the size of molecules and smaller. According to this theory, molecules can rotate only at certain discrete frequencies; much like the vibrations of a piano string, they tend to sit at specific "ringing" frequencies. (The rotation frequency is the number of revolutions that a molecule completes per second.) A molecule can absorb an incident wave (energy) if the wave has just the right frequency.
The frequency of radiation that can be absorbed or emitted depends on the molecule's structure. The H2O molecule is constructed in such a manner that it absorbs infrared radiation of wavelengths of about 12 micrometers and longer. This interaction gives rise to a very strong absorption feature in Earth's atmosphere called the H2O rotation band. As shown in the previous slide, virtually 100% of infrared radiation longer than 12 micrometers is absorbed by the combination of CO2 and H2O. (By the way, the H2O rotation band extends all the way into the microwave region of the electromagnetic spectrum, i.e. beyond a wavelength of 1000 micrometers, which is why a microwave oven is able to heat up anything that contains water.)
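Spectroscopists quote these bands in wavenumbers (cm⁻¹), while the text speaks in micrometers; the conversion is λ(µm) = 10⁴ / ν̃(cm⁻¹), applied here to the two CO2 band labels on the figure:

```python
# Conversion between spectroscopic wavenumber (cm^-1) and wavelength
# (micrometers): lambda_um = 1e4 / nu_cm. Applied to the two CO2
# band labels that appear on the slide (2349 and 667 cm^-1).
def wavenumber_to_um(nu_cm):
    """Wavelength in micrometers for a wavenumber in cm^-1."""
    return 1e4 / nu_cm

print(f"{wavenumber_to_um(2349):.2f} um")  # ≈ 4.26 um
print(f"{wavenumber_to_um(667):.1f} um")   # ≈ 15.0 um
```

Both wavelengths fall squarely in the outgoing terrestrial infrared, which is why CO2 matters for the greenhouse effect.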
The second way in which molecules can absorb or emit infrared radiation is by changing
the amplitude at which they vibrate. Molecules ...
A Novel Method for Prevention of Bandwidth Distributed Denial of Service Attacks (IJERD Editor)
Distributed Denial of Service (DDoS) attacks have become a massive threat to the Internet, and the
traditional Internet architecture is vulnerable to them. An attacker first acquires an army of zombie
machines, then instructs that army when to start an attack and whom to target. In this paper, the
techniques used to perform DDoS attacks, the tools used to launch them, and countermeasures for
detecting attackers and eliminating Bandwidth Distributed Denial of Service (B-DDoS) attacks are
reviewed. DDoS attacks are carried out using various flooding techniques.
The main purpose of this paper is to design an architecture that can reduce Bandwidth
Distributed Denial of Service attacks and make the victim site or server available to normal users by
eliminating the zombie machines. Our primary focus is to discuss how normal machines are turned into
zombies (bots), how an attack is initiated, the DDoS attack procedure, and how an organization can
save its server from becoming a DDoS victim. To demonstrate this, we implemented a simulated environment
with Cisco switches, routers, a firewall, several virtual machines, and attack tools to stage a real DDoS
attack. Using time scheduling, resource limiting, system logs, access control lists and the Modular
Policy Framework, we stopped the attack and identified the attacker (bot) machines.
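The resource-limiting idea behind the countermeasures above can be sketched minimally: count packets per source over a time window and flag sources that exceed a threshold. This is an illustration only (the threshold, window and addresses are made up), not the paper's Cisco configuration:

```python
from collections import Counter

# Toy flood detector: count packets per source in one time window and flag
# sources above a threshold, mimicking a resource-limiting / ACL rule.
# Threshold and traffic below are illustrative assumptions.

def flag_flooders(packets, threshold):
    """packets: iterable of source-IP strings seen in one time window."""
    counts = Counter(packets)
    return {ip for ip, n in counts.items() if n > threshold}

window = ["10.0.0.5"] * 500 + ["10.0.0.7"] * 3 + ["10.0.0.9"] * 2
print(flag_flooders(window, threshold=100))  # {'10.0.0.5'}
```

A real deployment would translate the flagged sources into ACL deny entries rather than a Python set.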
Hearing loss is one of the most common human impairments. It is estimated that by the year 2015 more
than 700 million people will suffer mild deafness. Most can be helped by hearing-aid devices, depending on the
severity of their hearing loss. This paper describes the implementation and characterization details of a dual-channel
transmitter front end (TFE) for digital hearing aid (DHA) applications that uses novel
micro-electromechanical-systems (MEMS) audio transducers and ultra-low-power, power-scalable analog-to-digital
converters (ADCs), enabling a very low form factor, energy-efficient implementation for next-generation
DHAs. The contribution of the design is the implementation of the dual-channel MEMS microphones and the
power-scalable ADC system.
Influence of tensile behaviour of slab on the structural Behaviour of shear c... (IJERD Editor)
A composite beam is composed of a steel beam and a slab connected by shear connectors, such as
studs installed on the top flange of the steel beam, to form a structure that behaves monolithically. This study
analyzes the effects of the tensile behavior of the slab on the structural behavior of the shear connection, such
as slip stiffness and maximum shear force, in composite beams subjected to hogging moment. The results show that
shear studs located in the crack-concentration zones caused by large hogging moments sustain significantly smaller
shear force and slip stiffness than those in other zones. Moreover, the reduction of slip stiffness in the shear
connection also appears to be closely related to the change in the tensile strain of the rebar as the load
increases. Further experimental and analytical studies should be conducted, considering variables such as the
reinforcement ratio and the arrangement of shear connectors, to achieve efficient design of the shear connection
in composite beams subjected to hogging moment.
Gold prospecting using Remote Sensing 'A case study of Sudan' (IJERD Editor)
Gold has been extracted from northeast Africa for more than 5000 years, and this may be the first
place where the metal was extracted. The Arabian-Nubian Shield (ANS) is an exposure of Precambrian
crystalline rocks on the flanks of the Red Sea. The crystalline rocks are mostly Neoproterozoic in age. The ANS
spans the nations of Israel, Jordan, Egypt, Saudi Arabia, Sudan, Eritrea, Ethiopia, Yemen, and Somalia.
The Arabian-Nubian Shield consists of juvenile continental crust that formed between 900 and 550 Ma, when
intra-oceanic arcs were welded together along ophiolite-decorated sutures. Primary Au mineralization probably
developed in association with the growth of the intra-oceanic arcs and the evolution of back-arcs. Multiple
episodes of deformation have obscured the primary metallogenic setting, but at least some of the deposits
preserve evidence that they originated as sea-floor massive sulphide deposits.
The Red Sea Hills region is a vast span of rugged, harsh and inhospitable terrain with an
inimical, moon-like landscape; nevertheless, since ancient times it has been famed as an abode of gold and was a
major source of wealth for the Pharaohs of ancient Egypt. The Pharaohs' old workings have been periodically
rediscovered through time. Recent endeavours by the Geological Research Authority of Sudan led to the
discovery of a score of occurrences of gold and massive sulphide mineralization. In the 1990s the
Geological Research Authority of Sudan (GRAS), in cooperation with BRGM, used Landsat TM
satellite data and the spectral ratio technique to map possible mineralized zones in the Red Sea
Hills of Sudan. The study mapped a gossan-type gold mineralization. The band ratio technique was
applied to the Arbaat area and a signature of an alteration zone was detected; such alteration zones are
commonly associated with mineralization. A field check confirmed the existence of a stockwork of
gold-bearing quartz in the alteration zone. Another type of gold mineralization discovered using remote
sensing is the gold associated with metachert in the Atmur Desert.
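The band ratio technique mentioned above divides pixel values of one spectral band by another so that alteration minerals (which absorb strongly in one band but not the other) stand out as high-ratio anomalies. A minimal sketch with two toy band grids; the pixel values and the ">2" anomaly cutoff are illustrative assumptions, not the study's data:

```python
# Spectral band ratio on two (toy) Landsat TM band grids. Ratios such as
# TM5/TM7 are commonly used to highlight hydroxyl-bearing alteration
# minerals; the pixel values here are made up for illustration.

def band_ratio(band_a, band_b, eps=1e-6):
    """Element-wise ratio of two equally sized 2-D grids."""
    return [[a / (b + eps) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(band_a, band_b)]

tm5 = [[120, 130], [140, 150]]
tm7 = [[60, 65], [40, 150]]
ratios = band_ratio(tm5, tm7)

# Pixels with a high ratio (here > 2) would be mapped as possible alteration zones.
anomalies = [(i, j) for i, row in enumerate(ratios)
             for j, r in enumerate(row) if r > 2]
print(anomalies)  # [(1, 0)]
```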
Reducing Corrosion Rate by Welding Design (IJERD Editor)
The paper addresses the importance of welding design for preventing corrosion of steel. Welding is
used to join pipes, bridge profiles, spindles, and many other parts of engineering constructions. Problems
associated with welding are common in these fields, especially corrosion. Corrosion can be reduced by many
methods, including painting, controlling humidity, and good welding design. The research finds that reducing
residual stress in the weld helps reduce the corrosion rate.
Preheating at 500 °C and 600 °C reduces the corrosion rate more than preheating at 400 °C. Across all welding
groove types, material preheated at 500 °C and 600 °C lost 0.5%-0.69% of its mass after a 14-day corrosion
test, while material preheated at 400 °C lost 0.57%-0.76%.
The welding groove also influences the corrosion rate. X- and V-type welding grooves reduce the corrosion rate
more than 1/2V and 1/2X grooves. After the 14-day corrosion test, samples with the X groove type lost
0.5%-0.57%, samples with the V groove lost 0.51%-0.59%, and samples with the 1/2V and 1/2X grooves lost
0.58%-0.71%.
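The mass-loss percentages quoted above follow from a simple before/after weighing. A worked example (the sample masses are hypothetical, chosen to reproduce a value in the reported range):

```python
# Percent mass loss after a corrosion test, as reported in the abstract.
# The sample masses below are hypothetical.

def mass_loss_percent(initial_g, final_g):
    return (initial_g - final_g) / initial_g * 100

# e.g. a 500 C preheated, X-groove sample losing 1.0 g of 200 g over 14 days
print(round(mass_loss_percent(200.0, 199.0), 2))  # 0.5
```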
Router 1X3 – RTL Design and Verification (IJERD Editor)
Routing is the process of moving a packet of data from source to destination; it enables messages
to pass from one computer to another and eventually reach the target machine. A router is a networking device
that forwards data packets between computer networks. It is connected to two or more data lines from different
networks (as opposed to a network switch, which connects data lines from one single network). This paper
mainly emphasizes the study of the router device and its top-level architecture, and how the various sub-modules
of the router, i.e. the register, FIFO, FSM and synchronizer, are synthesized, simulated and finally connected
to the top module.
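The 1x3 router's core behavior (demultiplexing an incoming packet into one of three output FIFOs by its destination address) can be sketched behaviorally. The actual design is RTL built from register, FIFO, FSM and synchronizer blocks; this Python model only mirrors the routing decision, and the header layout (a 2-bit address in the first byte) is an assumption for illustration:

```python
from collections import deque

# Behavioral sketch of a 1x3 router: the two low bits of the first header
# byte select one of three output FIFOs. Header layout is assumed; address
# value 3 is treated as invalid since there are only three outputs.

class Router1x3:
    def __init__(self):
        self.fifos = [deque(), deque(), deque()]

    def route(self, packet):
        addr = packet[0] & 0b11   # 2-bit destination address
        if addr == 3:
            raise ValueError("invalid address")
        self.fifos[addr].append(packet)

r = Router1x3()
r.route(bytes([0x01, 0xAA, 0xBB]))   # goes to output FIFO 1
r.route(bytes([0x02, 0xCC]))         # goes to output FIFO 2
print([len(f) for f in r.fifos])     # [0, 1, 1]
```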
Active Power Exchange in Distributed Power-Flow Controller (DPFC) At Third Ha... (IJERD Editor)
This paper presents a component within the flexible ac-transmission system (FACTS) family called the
distributed power-flow controller (DPFC). The DPFC is derived from the unified power-flow controller (UPFC)
with the common dc link eliminated. The DPFC has the same control capabilities as the UPFC, which comprise
adjustment of the line impedance, the transmission angle, and the bus voltage. The active power exchange
between the shunt and series converters, which passes through the common dc link in the UPFC, now passes
through the transmission lines at the third-harmonic frequency. The DPFC uses multiple small-size single-phase
converters, which reduces equipment cost, requires no voltage isolation between phases, and increases redundancy
and thereby reliability. The principle and analysis of the DPFC are presented in this paper, and the
corresponding simulation results, carried out on a scaled prototype, are also shown.
Mitigation of Voltage Sag/Swell with Fuzzy Control Reduced Rating DVR (IJERD Editor)
Power quality has become an increasingly pivotal issue from the point of view of industrial electricity
consumers in recent times. Modern industries employ sensitive power-electronic equipment, control devices and
non-linear loads as part of automated processes to increase energy efficiency and productivity. Voltage
disturbances are the most common power quality problem, made more critical by the increased use of large
numbers of sophisticated and sensitive electronic equipment in industrial systems. This paper discusses the
design and simulation of a dynamic voltage restorer (DVR) for improving power quality and reducing the harmonic
distortion seen by sensitive loads. Power quality problems appear as non-standard voltage, current and
frequency. Electronic devices are very sensitive loads, and in power systems voltage sag, swell, flicker and
harmonics are among the problems affecting them. The compensation capability of a DVR depends primarily on its
maximum voltage injection ability and the amount of stored energy available within the restorer. The device is
connected in series with the distribution feeder at medium voltage. A fuzzy logic controller is used to produce
the gate pulses for the control circuit of the DVR, and the circuit is simulated using MATLAB/SIMULINK software.
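The fuzzy control idea can be sketched minimally: map the per-unit voltage error to an injection command through triangular membership functions and a tiny rule base. The membership ranges and rule outputs below are illustrative assumptions, not the paper's controller design:

```python
# Minimal fuzzy mapping from per-unit voltage error to a DVR injection
# command. Membership functions and rule consequents are illustrative
# assumptions; a real DVR controller has a far richer rule base.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_injection(error_pu):
    # memberships for negative (swell), zero, and positive (sag) error
    mu = {"neg": tri(error_pu, -0.6, -0.3, 0.0),
          "zero": tri(error_pu, -0.3, 0.0, 0.3),
          "pos": tri(error_pu, 0.0, 0.3, 0.6)}
    out = {"neg": -0.3, "zero": 0.0, "pos": 0.3}   # rule consequents (pu)
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values()) or 1.0
    return num / den                               # centroid-style defuzzification

print(round(fuzzy_injection(0.15), 3))  # 15% sag -> inject ~0.15 pu
```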
Study on the Fused Deposition Modelling In Additive Manufacturing (IJERD Editor)
Additive manufacturing, also popularly known as 3-D printing, is a process in which a product
is created in a succession of layers. It is based on a novel material-incremental manufacturing philosophy.
Unlike conventional manufacturing processes, where material is removed from a given workpiece to derive the
final shape of a product, 3-D printing builds the product from scratch, obviating the need to cut away
material and thereby preventing wastage of raw materials. Commonly used raw materials for the process are ABS
plastic, PLA and nylon; recently the use of gold, bronze and wood has also been implemented. Geometric
complexity adds essentially no cost to the process, since an object of almost any shape and size can be manufactured.
Spyware triggering system by particular string value (IJERD Editor)
This computer programme can be used for good or bad purposes, in hacking or for general use. It can
be seen as the next step beyond hacking techniques such as keyloggers and spyware. In this system, once the
user or attacker stores a particular string as input, the software continually compares the user's typing
activity against that stored string and, if they match, launches the spyware programme.
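The trigger mechanism described above amounts to a rolling comparison of recent keystrokes against a stored string. A minimal, harmless sketch (the "action" here just records that the trigger fired; no keylogging or spyware is involved):

```python
# Sketch of the described trigger: keep a rolling buffer of typed
# characters and fire a callback when it ends with the stored trigger
# string. Illustration only; the callback here merely records the event.

class StringTrigger:
    def __init__(self, trigger, action):
        self.trigger = trigger
        self.action = action
        self.buffer = ""

    def on_key(self, ch):
        # retain only as many recent characters as the trigger is long
        self.buffer = (self.buffer + ch)[-len(self.trigger):]
        if self.buffer == self.trigger:
            self.action()

fired = []
t = StringTrigger("secret", lambda: fired.append(True))
for ch in "my secret password":
    t.on_key(ch)
print(fired)  # [True]
```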
A Blind Steganalysis on JPEG Gray Level Image Based on Statistical Features a... (IJERD Editor)
This paper presents a blind steganalysis technique to effectively attack JPEG steganographic
schemes, i.e. Jsteg, F5, Outguess and DWT-based. The proposed method exploits the correlations between
block-DCT coefficients from intra-block and inter-block relations, and the statistical moments of
characteristic functions of the test image are selected as features. The features are extracted from the
BDCT JPEG 2-array. A Support Vector Machine with cross-validation is implemented for the classification.
The proposed scheme gives improved outcomes in attacking these schemes.
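Feature extraction of this kind reduces a field of coefficients to a handful of numbers fed to the classifier. As a simplified stand-in (the paper uses moments of characteristic functions over intra- and inter-block BDCT arrays; this sketch computes only plain central moments, and the coefficients are made up):

```python
# Simplified feature sketch: mean and central moments of a list of
# block-DCT coefficients. The paper's features are moments of
# characteristic functions; this is a plain-moment stand-in, and the
# coefficient values below are made up.

def moments(xs, orders=(1, 2, 3)):
    n = len(xs)
    mean = sum(xs) / n
    feats = [mean]
    for k in orders[1:]:
        feats.append(sum((x - mean) ** k for x in xs) / n)
    return feats

coeffs = [0, 1, -1, 2, -2, 0, 0, 3, -1, 1]
print([round(m, 3) for m in moments(coeffs)])
```

Such a feature vector, computed per image, is what the SVM with cross-validation would be trained on.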
Secure Image Transmission for Cloud Storage System Using Hybrid Scheme (IJERD Editor)
Data in the cloud are transferred between servers and users. The privacy of those data is very
important, as they contain personal information; if the data are obtained by a hacker, they can be used
to defame a person. Delays can also occur during transmission, e.g. in mobile communication, where
bandwidth is low. Hence, compression algorithms are proposed for fast and efficient transmission,
encryption is used for security, and blurring provides an additional layer of security. These algorithms
are hybridized to achieve robust, efficient security and transmission over a cloud storage system.
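The compress-then-encrypt pipeline can be sketched with the standard library. A repeating-key XOR stands in for a real cipher purely for illustration (a production system would use an authenticated cipher such as AES-GCM), and the "image" is fake redundant data:

```python
import zlib

# Sketch of the hybrid pipeline: compress, then encrypt, then the reverse
# on receipt. The XOR "cipher" and the fake image bytes are illustrative
# stand-ins only; use a real authenticated cipher in practice.

def xor_stream(data, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def protect(image_bytes, key):
    return xor_stream(zlib.compress(image_bytes), key)

def recover(blob, key):
    return zlib.decompress(xor_stream(blob, key))

img = b"\x10\x20" * 5000          # highly redundant fake image data
key = b"k3y!"
blob = protect(img, key)
assert recover(blob, key) == img   # round trip is lossless
print(len(img), len(blob))         # compression shrinks the payload
```

Compressing before encrypting matters: encrypted bytes look random and no longer compress.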
Application of Buckley-Leverett Equation in Modeling the Radius of Invasion i... (IJERD Editor)
A thorough review of existing literature indicates that the Buckley-Leverett equation is usually applied
to waterflood practices directly, without any adjustment for real reservoir scenarios, which introduces quite
a number of errors into the analyses. Also, for most waterflood scenarios, a radial treatment is more
appropriate than a simplified linear system. This study investigates the adaptation of the Buckley-Leverett
equation to estimate the radius of invasion of the displacing fluid during waterflooding. The model is also
adapted for a microbial flood, and a comparative analysis is conducted for both waterflooding and microbial
flooding. The analysis not only succeeds in determining the radial distance of the leading edge of water
during the flooding process, but also gives a clearer understanding of the applicability of microbes to
enhance oil production through in-situ production of bio-products such as biosurfactants, biogenic gases
and bio-acids.
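At the core of any Buckley-Leverett analysis, linear or radial, is the fractional-flow curve f_w(S_w). A minimal sketch, neglecting gravity and capillary pressure and using Corey-type relative permeabilities with made-up fluid and rock parameters (not the paper's field data):

```python
# Buckley-Leverett fractional flow of water, f_w = 1 / (1 + (kro/mu_o)*(mu_w/krw)),
# with Corey-type relative permeabilities. All parameter values are
# illustrative assumptions.

def fractional_flow(sw, mu_w=0.5, mu_o=2.0, swc=0.2, sor=0.2, nw=2, no=2):
    s = (sw - swc) / (1 - swc - sor)       # normalized water saturation
    s = min(max(s, 0.0), 1.0)
    krw = s ** nw                          # water rel-perm (Corey)
    kro = (1 - s) ** no                    # oil rel-perm (Corey)
    if krw == 0:
        return 0.0
    return 1.0 / (1.0 + (kro / mu_o) * (mu_w / krw))

for sw in (0.3, 0.5, 0.7):
    print(sw, round(fractional_flow(sw), 3))
```

The frontal-advance step (linear or radial) then propagates each saturation at a speed proportional to df_w/dS_w.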
Gesture Gaming on the World Wide Web Using an Ordinary Web Camera (IJERD Editor)
Gesture gaming is a method by which users with a laptop, PC or Xbox play games using natural or
bodily gestures. This paper presents a way of playing free Flash games on the Internet using an ordinary
webcam with the help of open-source technologies. In human-activity recognition, emphasis is placed on pose
estimation and the consistency of the player's pose. These are estimated with the help of an ordinary web
camera at resolutions ranging from VGA to 20 MP. Our work involved showing the user a 10-second documentary
on how to play a particular game using gestures and what kinds of gestures can be performed in front of the
system. The initial RGB values for the gesture component are obtained by instructing the user to place the
component in a red box for about 10 seconds after the short documentary finishes. The system then opens the
chosen game on popular Flash game sites such as Miniclip, Games Arcade or GameStop, loads it by clicking at
the required places, and brings it to the state where the user need only perform gestures to start playing.
At any point the user can call off the game by hitting the Esc key, and the program will release all controls
and return to the desktop. It was noted that the results obtained using an ordinary webcam matched those of
the Kinect, and users could relive the gaming experience of free Flash games on the net. Effective in-game
advertising could therefore also be achieved, resulting in disruptive growth for advertising firms.
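The color-based initialization described above (sampling the RGB values of the gesture component) can be sketched as a simple red-dominance threshold over a frame, returning the component's centroid. The frame and the threshold values are illustrative assumptions, not the paper's calibration:

```python
# Sketch of locating a red gesture component in a frame: scan RGB pixels,
# keep those dominated by red, and return their centroid. The 2x2 frame
# and threshold values are illustrative only.

def find_red_centroid(frame, r_min=150, dominance=50):
    hits = [(x, y) for y, row in enumerate(frame)
            for x, (r, g, b) in enumerate(row)
            if r > r_min and r - max(g, b) > dominance]
    if not hits:
        return None
    return (sum(x for x, _ in hits) / len(hits),
            sum(y for _, y in hits) / len(hits))

frame = [[(10, 10, 10), (200, 30, 40)],
         [(220, 20, 10), (10, 10, 10)]]
print(find_red_centroid(frame))  # (0.5, 0.5)
```

Tracking the centroid frame-to-frame is what turns the color blob into a gesture signal.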
Hardware Analysis of Resonant Frequency Converter Using Isolated Circuits And... (IJERD Editor)
The LLC resonant frequency converter is basically a combination of a series and a parallel resonant
circuit. The LCC resonant converter has the disadvantage that, although it has two resonant frequencies,
the lower resonant frequency lies in the ZCS region [5], so for this application we are unable to design the
converter to work at that resonant frequency. The LLC resonant converter has existed for a very long time,
but because its characteristics were poorly understood it was used as a series resonant converter with an
essentially passive (resistive) load. Here it was designed to operate at a switching frequency higher than
the resonant frequency of the series resonant tank of Lr and Cr, where the converter acts very much like a
series resonant converter. The benefit of the LLC resonant converter is its narrow switching-frequency range
under light load [6]. The control circuit plays a very important role: the 555 timer used here provides a
clean square wave, since the control circuit introduces no slew, keeping the square wave sharp. The dead-band
circuit provides a dead band of a few microseconds to avoid simultaneous firing of the two pairs of IGBTs,
so that one pair switches off before the other switches on. An isolator circuit is associated with every
circuit used, because it acts as a driver, and isolation for each IGBT is provided by a dedicated transformer
supply [3]. The IGBTs are fired with the appropriate signals from the preceding boards, and finally a
high-frequency rectifier circuit with a filtering capacitor is used to obtain a clean dc waveform. The basic
goal of this particular analysis is to observe the waveforms and characteristics of converters with
differently positioned passive elements in the form of tank circuits.
Simulated Analysis of Resonant Frequency Converter Using Different Tank Circu... (IJERD Editor)
The LLC resonant frequency converter is basically a combination of a series and a parallel resonant
circuit. The LCC resonant converter has the disadvantage that, although it has two resonant frequencies,
the lower resonant frequency lies in the ZCS region [5], so for this application we are unable to design the
converter to work at that resonant frequency. The LLC resonant converter has existed for a very long time,
but because its characteristics were poorly understood it was used as a series resonant converter with an
essentially passive (resistive) load. Here it was designed to operate at a switching frequency higher than
the resonant frequency of the series resonant tank of Lr and Cr, where the converter acts very much like a
series resonant converter. The benefit of the LLC resonant converter is its narrow switching-frequency range
under light load [6]. The control circuit plays a very important role: the 555 timer used here provides a
clean square wave, since the control circuit introduces no slew, keeping the square wave sharp. The dead-band
circuit provides a dead band of a few microseconds to avoid simultaneous firing of the two pairs of IGBTs,
so that one pair switches off before the other switches on. An isolator circuit is associated with every
circuit used, because it acts as a driver, and isolation for each IGBT is provided by a dedicated transformer
supply [3]. The IGBTs are fired with the appropriate signals from the preceding boards, and finally a
high-frequency rectifier circuit with a filtering capacitor is used to obtain a clean dc waveform. The basic
goal of this particular analysis is to observe the waveforms and characteristics of converters with
differently positioned passive elements in the form of tank circuits. The supporting simulation is done with
the PSIM 6.0 software tool.
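The series resonant frequency of the Lr-Cr tank referred to in both abstracts follows the standard relation f_r = 1/(2*pi*sqrt(L*C)). A quick check with illustrative component values (not the prototype's):

```python
import math

# Series resonant frequency of the Lr-Cr tank: fr = 1 / (2*pi*sqrt(L*C)).
# The component values below are illustrative, not from the prototype.

def resonant_frequency(l_henry, c_farad):
    return 1.0 / (2 * math.pi * math.sqrt(l_henry * c_farad))

fr = resonant_frequency(100e-6, 100e-9)   # 100 uH, 100 nF
print(round(fr))                          # ~50.3 kHz
```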
An amateur radio operator, also known as a HAM, communicates with other HAMs through radio
waves. Wireless communication in which the Moon is used as a natural satellite is called Moon-bounce or EME
(Earth-Moon-Earth). Long-distance communication (DXing) with VHF-operated amateur HAM radio used to be
difficult, yet even with a modest setup comprising a good transceiver, a power amplifier and a high-gain
antenna with high directivity, VHF DXing is possible. Generally a 2X11 Yagi antenna, along with a rotor to
set the horizontal and vertical angles, is used. Moon-tracking software gives the exact location and
visibility of the Moon at both stations, and other vital data needed to acquire the real-time position of the Moon.
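Why EME needs power amplifiers and high-gain antennas is clear from the path loss alone. A sketch computing the one-way free-space path loss to the Moon on the 2 m (144 MHz) band; actual EME link budgets add the return leg and the Moon's reflection loss, bringing the round-trip total to roughly 250 dB:

```python
import math

# One-way free-space path loss to the Moon at 144 MHz (2 m amateur band),
# FSPL(dB) = 20*log10(4*pi*d*f/c). Mean Earth-Moon distance ~384,400 km.

def fspl_db(distance_m, freq_hz):
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

loss = fspl_db(384_400e3, 144e6)
print(round(loss, 1))   # ~187 dB one way
```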
"MS-Extractor: An Innovative Approach to Extract Microsatellites on 'Y' Chrom... (IJERD Editor)
Simple Sequence Repeats (SSRs), also known as microsatellites, have been extensively used as
molecular markers due to their abundance and high degree of polymorphism. The nucleotide sequences of
polymorphic forms of the same gene should be 99.9% identical, so extracting microsatellites from the gene is
crucial. When microsatellite repeat counts are compared, a large difference can indicate a disorder. The Y
chromosome likely contains 50 to 60 genes that provide instructions for making proteins. Because only males
have the Y chromosome, the genes on this chromosome tend to be involved in male sex determination and
development. Several microsatellite extractors exist, but they fail to extract microsatellites from large
data sets gigabytes and terabytes in size. The proposed tool, "MS-Extractor: An Innovative Approach to
Extract Microsatellites on 'Y' Chromosome", can extract both perfect and imperfect microsatellites from
large data sets of the human 'Y' genome. The proposed system uses string matching with a sliding-window
approach to locate microsatellites and extract them.
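The sliding-window string-matching idea can be sketched for the perfect-repeat case: for each candidate motif length (1-6 bp, the usual SSR range), slide along the sequence and report runs where the motif repeats at least a minimum number of times. This is a simplified illustration of the approach; imperfect-repeat handling and large-file streaming are omitted, and the thresholds are assumptions:

```python
# Sliding-window search for perfect microsatellites (SSRs): for each motif
# length 1..max_motif, report runs where the motif repeats at least
# min_repeats times. Simplified sketch; imperfect repeats are not handled.

def find_ssrs(seq, min_repeats=4, max_motif=6):
    found = []
    i = 0
    while i < len(seq):
        best = None
        for m in range(1, max_motif + 1):
            motif = seq[i:i + m]
            if len(motif) < m:
                break
            n = 1
            while seq[i + n * m:i + (n + 1) * m] == motif:
                n += 1
            if n >= min_repeats:
                best = (i, motif, n)       # (start, motif, repeat count)
                break                      # prefer the shortest motif
        if best:
            found.append(best)
            i += len(best[1]) * best[2]    # skip past the whole repeat run
        else:
            i += 1
    return found

print(find_ssrs("GGCATATATATATGGCAGCAGCAGCAGTT"))
# [(3, 'AT', 5), (14, 'GCA', 4)]
```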
Importance of Measurements in Smart Grid (IJERD Editor)
Driven by the need for reliable supply, independence from fossil fuels, and the capability to provide
clean energy at a fixed and lower cost, the existing power grid is transforming into the Smart Grid. The
development of a smart energy distribution grid is a current goal of many nations. A Smart Grid should have
new capabilities such as self-healing, high reliability, energy management, and real-time pricing. This new
era of the smart future grid will lead to major changes in existing technologies at the generation,
transmission and distribution levels. The incorporation of renewable energy resources and distributed
generators in the existing grid will increase the complexity, optimization problems and instability of the
system. This will lead to a paradigm shift in the instrumentation and control requirements of Smart Grids
for a high-quality, stable and reliable supply of electric power. Monitoring of the grid's state and
stability relies on the availability of reliable measurement data. In this paper, the measurement areas that
highlight new measurement challenges, the development of Smart Meters, and the critical parameters of
electric energy to be monitored for improving the reliability of power systems are discussed.
Study of Macro level Properties of SCC using GGBS and Lime stone powder (IJERD Editor)
One of the major environmental concerns is the disposal of waste materials and the utilization of
industrial by-products. Limestone quarries produce millions of tons of waste dust powder every year. Having
a considerably higher degree of fineness than cement, this material may be utilized as a partial replacement
for cement. For this purpose, an experiment was conducted to investigate the possibility of using limestone
powder, combined with GGBS, in the production of self-compacting concrete (SCC), and how it affects the
fresh and mechanical properties of SCC. First, SCC was made by replacing cement with GGBS at 10, 20, 30, 40
and 50 percent; then, taking the optimum GGBS mix, limestone powder was blended in at 5, 10, 15 and 20
percent as a partial replacement for cement. Test results show that the SCC mix combining 30% GGBS and 15%
limestone powder gives the maximum compressive strength, and its fresh properties are also within the limits
prescribed by EFNARC.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation, however, takes much work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution-engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence gathering facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Key Trends Shaping the Future of Infrastructure.pdf
International Journal of Engineering Research and Development (IJERD)
e-ISSN: 2278-067X, p-ISSN: 2278-800X, www.ijerd.com
Volume 7, Issue 11 (July 2013), PP. 01-09
The Corona Effect
André Michaud
SRP Inc., Service de Recherche Pédagogique, Québec, Canada
Abstract:- It can be shown that electron/positron pair production, nucleogenesis and nucleisynthesis could be much more extensive in the Sun's corona than assumed in current theories. It can also be shown that the extreme temperatures observed in the corona may be due to nucleon genesis within the corona, and that most heavy elements in the Solar system could be indigenous, having been produced in the corona by nucleisynthesis.
Keywords:- nucleogenesis, nucleisynthesis, neutron, proton, electron, positron, corona, photosphere, chromosphere, CME, solar winds, 1.022 MeV, pair creation, planetary system, planet, Sun, star.
I. SUMMARY DESCRIPTION OF THE CORONA
The most remarkable feature of the Sun's corona is its extreme temperature, which far exceeds that of the Solar surface (the photosphere) and of its atmosphere (the chromosphere), which is located between the lower edge of the corona and the photosphere. While the temperatures of both photosphere and chromosphere remain fairly constant at ≈5800 °K up to an altitude of ≈2400 km, the temperature steeply climbs towards the 11000 °K mark within a rather narrow transition region, then abruptly jumps over the 1 million °K mark at an altitude of ≈2500 km, which marks the lower edge of the corona. Let us note here that 11000 °K is the temperature of total ionization of hydrogen.
A. Unexplained coronal temperatures in the millions of °K
From this point outward within the corona, temperatures of 2 to 3 million °K are often observed, with frequent far higher peaks. These million+ degree temperatures in the corona average out at about 200 times that of the photosphere and chromosphere. This extreme average temperature in the corona is an equilibrium temperature, which means that the huge energy losses that the corona suffers through constant exchanges with the chromosphere and outward ejections of material are permanently compensated by some internal coronal process not yet understood.
This quite abrupt increase in temperature at the chromosphere-corona boundary is accompanied by an
equally abrupt density decrease by many orders of magnitude that sufficiently thins out the coronal material for
it to reach a state of highly energetic collisionless medium, which is the definition of a plasma.
On an 11-year cycle, the shape of the corona oscillates from being a wide crown about the Sun’s equator to being a completely closed envelope surrounding the Sun.
It is assumed quite logically that the only possible cause of this million+ °K heat in the corona somehow has to be some process involving the Sun itself. But since none of the score of heating models currently being considered ([1], p. 360, Table 9.2) has been satisfactorily demonstrated to account for more than about 10% of the observed coronal heat, the whole issue remains essentially unresolved.
The reason for questioning that the actual process could directly involve the Sun is the fact that it is impossible, in view of the 2nd principle of thermodynamics, that the 5800 °K heat coming from the photosphere and chromosphere could explain the 200-fold rise in temperature observed at the chromosphere-corona boundary. What is happening at the corona-chromosphere interface is as disconcerting as if we observed a pot full of water starting to boil upon being put on a block of dry ice, which is why the idea that some process internal to the corona could be at play is to be considered.
Quoting Markus Aschwanden in his excellent textbook "Physics of the Solar Corona": "The physical understanding of this high temperature in the solar corona is still a fundamental problem in astrophysics, because it seems to violate the second thermodynamics law, given the much cooler photospheric boundary, which has an average temperature of T = 5785 °K" ([1], p. 26).
In relation to these extreme temperatures, all atoms present in the corona are ionized, contrary to those in the chromosphere, which means that all hydrogen atoms are fully ionized in the corona since each of them has only one electron to shed. The energy of the free-moving electrons in the corona is so great that permanent capture by positive ions becomes practically impossible.
B. Hundreds of billions of tons of material expelled each day
The corona is a highly fluctuating and inhomogeneous medium, constantly being stirred up by important upflow and downflow exchanges with the chromosphere. Intense closed magnetic fluxes, originating mainly from the equatorial belt of the Sun, constantly reconfigure it, while open magnetic fluxes from the poles, the cause of the solar winds, constantly expel hundreds of billions of tons of ionized material each day from the outer edges of the corona, to migrate into the whole Solar system.
II. OVERABUNDANCE OF ELEMENTS IN THE CORONA
The variety of elements that can be found in atomic state in the corona is largely similar to that of the
photosphere and general cosmic distribution as confirmed by comparing coronal spectral analysis with meteorite
analysis.
C. Threefold overabundance of detected metals
One fact of particular interest is that most of the metals detected, particularly sodium, magnesium, aluminum, iron and nickel, seem to be about 3 times more abundant in the corona and solar winds than in the Sun's photosphere ([1], p. 31, Table 1.2)! This second fact, besides the 200-fold higher temperature of the corona with respect to the photosphere and chromosphere, further strengthens the possibility that some process internal to the corona could be at play, not directly involving the Sun itself.
The sensitivity of current instruments prevents being as affirmative about the other elements, so we do not have much data on the relative abundance of most other elements with respect to the photosphere.
D. Two-thousand-fold overabundance of helium
There is a notable and intriguing exception regarding helium, however: the observed overabundance of both isotopes (He3 and He4) during coronal flares reaches the astonishing figure of 2000 times ([48], p. 499, Table 11.3). Could such flares be a particularly favorable circumstance for nucleisynthesis of lighter elements (see further on), or do they simply make more obvious such a possibly general helium overabundance in the corona?
Elements with atomic numbers 31 and higher cannot currently be clearly detected in the corona. But there is no doubt that all elements of the periodic table can be found in the corona, since they all are detected, up to and including uranium, in the photosphere, with which the corona has constant material exchanges.
E. All stars have coronas
Elsewhere in the Universe, it was found that all stars that have been examined with x-ray telescopes also have a corona, some, belonging to young stars, being much more active than the Sun’s. So coronal activity seems to be a universal process accompanying each star.
Now that we have put in perspective the main unexplained features of the corona, that is its extreme
temperature and confirmed overabundance of practically half the elements that can be detected in it (lacking
sufficient data about the other elements), let us analyze these features in light of other known facts.
III. POSITRON PRODUCTION IN THE CORONA
In his excellent textbook, Aschwanden puts in perspective positron production in the corona by β-decay from radioactive ions as shown by Pauli in 1930 ([1], p. 631, Table 14.3). But quite astonishingly, no mention whatsoever is made of the positron materialization process from photons of energy 1.022+ MeV decoupling into electron-positron pairs, discovered by Blackett and Occhialini in 1933 ([5]). An analysis of the mechanics of electron-positron pair production in the 3-spaces model is carried out in a separate paper ([7]).
This very well documented process of photons of energy 1.022+ MeV easily converting to electron/positron pairs when grazing atomic nuclei ([3], p. 17) is also very well known and understood in high energy accelerator circles.
γ → e- + e+ (8)
It was also exhaustively demonstrated that positrons and electrons are totally identical, except for the sign of their charges, both particles having the exact same invariant rest mass of 9.10938188E-31 kg, or 0.511 MeV/c2, which is exactly half the energy of the lowest energy photon that can convert to a pair of these particles.
It is well established that if a photon being converted possessed more than 1.022 MeV to start with, the energy in excess of the two quantities of 0.511 MeV/c2 making up the rest masses of the particles directly determines the relative velocities in opposite directions of both particles in space after materialization ([2], p. 174).
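The energy bookkeeping described above can be reproduced in a few lines of arithmetic (an illustrative sketch; the 0.511 MeV electron rest energy is the standard value, and the function names are hypothetical):

```python
# Pair-production energy bookkeeping: a photon of at least 1.022 MeV
# can materialize into an electron-positron pair; any energy in excess
# of the two rest masses goes into the kinetic energy of the pair.

ELECTRON_REST_MASS_MEV = 0.511  # rest mass energy of e- (and e+), in MeV

def pair_production_threshold():
    """Minimum photon energy able to materialize an e-/e+ pair."""
    return 2 * ELECTRON_REST_MASS_MEV

def excess_kinetic_energy(photon_energy_mev):
    """Kinetic energy shared by the pair after materialization."""
    excess = photon_energy_mev - pair_production_threshold()
    if excess < 0:
        raise ValueError("photon below the 1.022 MeV decoupling threshold")
    return excess

print(pair_production_threshold())   # 1.022
print(excess_kinetic_energy(2.0))    # ≈ 0.978 MeV shared by the two particles
```

For a photon only marginally above 1.022 MeV, the excess is nearly zero, which is the "practical dead stop" condition invoked later in the paper.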
It was also experimentally confirmed in 1997 by a team led by Kirk McDonald, at the Stanford Linear
Accelerator (SLAC) that electron/positron pairs can also be produced by converging concentrated beams of
sufficiently energetic photons towards a single point in space, which means that these photons succeed in
destabilizing each other when they are forced close enough to each other, just like they are destabilized by
passing close to a heavy nucleus.
F. Abundance of 1.022+ MeV photons in the corona
Now, why is this very well known and documented electron-positron genesis process not even mentioned in any paper or textbook concerning the corona, considering that photons of energy 1.022+ MeV occur as a practical continuum in the corona? This is a question for which I have no answer.
IV. HYPOTHESIS OF NUCLEON GENESIS IN THE CORONA
Before analyzing a process that could explain the million+ °K temperatures ambient in the corona, there is a need to put in perspective a little-documented high-yield adiabatic energy induction process.
Beyond the well documented first stage process consisting in the conversion of massless photons into pairs of massive electrons and positrons, there also exists a little known and practically unexplored related second stage higher-mass genesis process. The first preliminary hints of it were theorized about in the 1950's, and not a word about it is mentioned in Aschwanden's textbook either, but it is described by M. Haïssinsky, Director of Research at the C.N.R.S. in Paris in the 1950's, in his book "La chimie nucléaire et ses applications".
According to him, it had been theoretically demonstrated that metastable combinations of 2 positrons + 1 electron, or alternately 2 electrons + 1 positron, show some stability, but much less than that of positronium, and that no experimental verification had been carried out at the date of publication to explore what happens when acceleration kicks in ([3], p. 33).
Theoretically, this process could involve the following relations:
e- + 2e+ → p + 3γ → p + nπo + nπ± (9)
and
2e- + e+ → n + 3γ → n + nπo + nπ± (10)
It is well established experimentally that π mesons (made up of up and down quarks, which are marginally more massive than electrons) can routinely be created from head-on collisions of two beams of electrons and positrons ([4]), and that even far more massive baryons (protons and neutrons) also are customary byproducts of such head-on scattering of beams of far less massive electrons and positrons ([6]).
It was routinely observed and extensively studied in high energy accelerators that when beams of electrons and positrons are collided head-on with sufficient energy, a variety of particles more massive than the colliding electrons and positrons systematically materialize as a function of the quantity of energy liberated during such scattering events, including protons and neutrons.
The following set is mentioned as more specifically observed:
2e- + Kinetic Energy → p + 4e- + 2e+ + nπo + nπ± + other particles (11)
These experimental observations will be discussed more deeply further on.
A complete analysis of the mechanics of neutron and proton production in the 3-spaces model from
triads of electrons and positrons is carried out in a separate paper ([8]). This process can be summarized as
follows:
Considering the presence of 2 electrons plus 1 positron, thermal enough and close enough to each other to meta-stabilize into a closed system before inevitably decaying (that is, starting to adiabatically accelerate inwards towards their center of mass), we observe that we are dealing with two electrons that repel each other while both are simultaneously being attracted to the same single positron.
For such metastable triads to form, the particles have to be in a very low thermal state, which leaves them with not enough energy to escape from each other after initial mutual metastable capture, a metastable system doomed to decay even faster than positronium, according to Haïssinsky's description.
Extensive experiments with positronium, a metastable system made up of an electron-positron pair forced into a volume smaller in diameter than 2.116708996E-10 meter ([9], p. 323) with insufficient energy to escape mutual capture, show that positronium eventually decays by dematerializing into 2 or 3 photons whose sum of energies carries away the total amount of energy that made up the masses of both initial particles. Conversion of such pairs into single 1.022 MeV photons has even been observed ([8], Section B).
But when 2 thermal electrons and 1 thermal positron are captured in such a common low energy system
as described by Haïssinsky, they find themselves in a unique situation since all three particles are elementary,
which means that none of them can be split to then form two pairs that could join as positronium double-particle
systems that could then decay into massless photons.
Of course, as decay proceeds in their attempt to join anyway, the particles will adiabatically accelerate as they start translating about their common center of mass; the two electrons will obviously repel each other more and more strongly as they get closer, while being simultaneously more and more strongly attracted to the single positron. In the 3-spaces model, the end result of this process is the production of a neutron plus three high energy photons.
Indeed, if down quarks actually are electrons being constrained into showing up as down quarks with
fractional charges when confined into the structure of a nucleon, they fundamentally remain the same
elementary particles, but in a slightly more massive form and with diminished electric field and increased
magnetic field caused by the energy drift toward magnetic state, an energy drift due to the very tight closed orbit
that they are constrained into. The same would of course apply to up quarks being similarly constrained
positrons.
Then of course, the fact that up and down quarks could never be observed moving freely out of destructively scattered nucleons finds a quite simple explanation in the 3-spaces model, because as soon as they are liberated from the constraining electromagnetic environment of the nucleon structure during such destructive scattering events, the electrons and positrons involved would immediately recover their full unrestricted unit charge and usual rest mass!
The detailed description of this possible mechanics of nucleon genesis (nucleogenesis) by adiabatic acceleration of electrons and positrons far exceeds the scope of the present paper, but it is fully laid out, with complete theoretical support, in a separate paper ([8]). Similarly, the magnetic drift responsible for the diminished electric charges of elementary particles forced into closed orbits is analyzed in another separate paper ([11]).
As a final step of this adiabatic acceleration process, leading to nucleon production from such triads, a final stable state is reached, at which point the three particles display slightly increased masses and diminished charges. At this point it becomes impossible for the particles (2 electrons "now down quarks" and 1 positron "now up quark") to approach any closer, due to the magnetic repulsion between the various components in motion, a repulsion ending up exactly counterbalancing the electrostatic attraction. This equilibrium state is described in a separate paper ([2]).
The analysis shows that as the three particles stabilize at a translation radius of the order of 1.2E-15 m, an energy of about 310 MeV is continuously being induced for each quark of the triad. This means that when that final state of the shrinking triad formation is reached, three extremely energetic bremsstrahlung photons of about 155 MeV each have to be emitted to evacuate the energy accumulated during the adiabatic acceleration process in excess of the amount required at the final rest state distance ([8], Sections VII and VIII), most probably immediately converting to π mesons, leaving behind only the maintenance energy perpetually induced at this distance.
V. NUCLEOGENESIS BREMSSTRAHLUNG ENERGY IN THE CORONA
Since such large quantities of 1.022+ MeV photons are constantly present in the corona, the conditions seem potentially appropriate for triads of electrons and positrons to occasionally find ideal conditions for mutual capture in systems that eventually decay to become massive quantities of protons and neutrons. The question then is: what would be observed in the corona that could support such an expectation?
In a recent paper ([8], Section VIII), a thorough analysis was carried out describing how electrostatic acceleration at the fundamental particle level induces extra energy when an electron is captured for the first time by a proton after being created by pair production from the conversion of a 1.022 MeV photon, a process that does not violate the principle of conservation of energy, given that this first acceleration after creation of the electron is an irreversible adiabatic process ([8], Section VI).
Similarly, when 2 electrons plus 1 positron mutually capture in a metastable system and accelerate adiabatically inwards in the system for the first time in their existence to end up as a neutron, now possessing 600 times more energy than the three original particles, that is 939.56533 MeV/c2, after having stabilized at a translation radius of 1.2E-15 m, kinetic energy of about 310 MeV has to be continuously induced for each quark of the triad.
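As a quick check on the "600 times" figure quoted above (an illustrative calculation using standard rest-energy values, not data from this paper), the neutron's rest energy divided by that of the three initial particles is indeed roughly 600:

```python
# Ratio of the neutron rest energy to the combined rest energy of the
# three particles (2 electrons + 1 positron) said to form it.

NEUTRON_REST_MEV = 939.56533   # neutron rest energy, MeV (standard value)
ELECTRON_REST_MEV = 0.511      # electron/positron rest energy, MeV

triad_rest_energy = 3 * ELECTRON_REST_MEV   # 1.533 MeV
ratio = NEUTRON_REST_MEV / triad_rest_energy
print(round(ratio))            # ≈ 613, i.e. roughly the "600 times" quoted
```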
So, quite logically, when the final state of the shrinking triad formation is reached, three extremely energetic bremsstrahlung photons of about 155 MeV each are emitted as the positron and electrons (now up and down quarks) stabilize on their final orbits, carrying away the excess unidirectional kinetic energy that the three particles accumulated during acceleration. This makes for a total amount of 155 x 3 = 465 MeV of energy being released for each nucleon thus created.
G. Nucleogenesis-driven 227-fold increase in ambient energy
As already analyzed, the threesomes of electrons and positrons that will end up associating and accelerating into stabilizing as each of these nucleons can theoretically be seen as having been generated from two 1.022 MeV photons splitting into two pairs, for a total of 1.022 x 2 = 2.044 MeV initial energy.
So, on top of ending up with either one proton plus one free electron, or one neutron plus one free positron, each nucleon creation event from two 1.022 MeV photons causes an increase in ambient energy of 465 ÷ 2.044 = 227.5 times the energy that was present as the triad acceleration process began, which falls exactly into the energy increase range observed in the corona!
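The arithmetic of this claimed 227-fold gain can be reproduced directly (a sketch of the paper's own figures; this only checks the ratio, not the proposed mechanism):

```python
# The paper's energy-gain bookkeeping: two 1.022 MeV photons supply the
# triad, and three ~155 MeV photons are claimed to be emitted per
# nucleon created.

initial_energy = 2 * 1.022   # MeV, two pair-creating photons
released_energy = 3 * 155    # MeV, three 155 MeV bremsstrahlung photons
gain = released_energy / initial_energy
print(released_energy)       # 465
print(round(gain, 1))        # 227.5
```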
H. Quantities of nucleogenesis mesons detected in the corona
It goes without saying that all three 155 MeV photons thus produced are more than likely to immediately convert to pions, since they come into being in the immediate vicinity of the massive nucleon from which they are escaping. The apparently normal destabilization mode of such high energy photons logically being conversion to the most massive transient particle that can be produced, they most certainly will immediately convert to π mesons.
Any one of the following combinations is likely to be stochastically produced with each nucleon
generated:
e- + 2e+ → p + 3γ → p + X (12)
and
2e- + e+ → n + 3γ → n + X (13)
Where X can take any one of the following values:
3πo; 2πo + π-; 2πo + π+; πo + π- + π+; πo + 2π-; πo + 2π+; 3π-; 3π+; 2π- + π+; 2π+ + π-
These mesons have been observed in abundance in the corona and are known to quickly decay into final states that always turn out to be more gamma photons and electron-positron pairs, creating in the process highly energetic electrons and positrons, and also gamma photons, most of which exceed the 1.022 MeV decoupling threshold ([1], p. 632)!
Neutral mesons (πo) have an initial rest mass of 135 MeV/c2 while charged mesons (π- and π+) have an initial rest mass of 139 MeV/c2. So from the 155 MeV photon that the meson is produced from, the latter carries on with the very high kinetic energy of respectively 20 MeV and 15 MeV.
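Using the standard (unrounded) pion rest masses, the residual kinetic energies quoted above follow directly (an illustrative check; the precise masses are standard values, not taken from this paper):

```python
# Kinetic energy left to a pion materialized from a 155 MeV photon:
# the photon energy minus the pion's rest energy.

PHOTON_MEV = 155.0
PI0_REST_MEV = 134.98         # neutral pion rest energy (standard value)
PI_CHARGED_REST_MEV = 139.57  # charged pion rest energy (standard value)

ke_neutral = PHOTON_MEV - PI0_REST_MEV        # ≈ 20 MeV
ke_charged = PHOTON_MEV - PI_CHARGED_REST_MEV # ≈ 15 MeV
print(round(ke_neutral), round(ke_charged))   # 20 15
```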
I. Quantities of extra e+ and e- produced from meson decay
Neutral π mesons are known to practically always decay into a pair of equal energy 67 MeV photons, and occasionally into a pair of electron and positron plus one photon carrying the remainder of the energy, meaning that any excess kinetic energy that the meson may have, if it converts to two photons, is taken up by the particle whose interaction with the meson caused its conversion to photon energy:
πo → 2γ (14)
or
πo → e- + e+ + γ (15)
For their part, π- and π+ charged mesons generally decay first into like-signed muons and finally into like-signed electrons or positrons plus corresponding neutrinos:
π- → µ- + anti-νµ → e- + anti-νe (16)
and
π+ → µ+ + νµ → e+ + νe (17)
This implies that if such a nucleon genesis process is frequent in the corona, and possibly even was the main source of energy in the corona, sizable quantities of high energy electrons and positrons would not need to be accelerated to the high energies that are being observed, since they would come into being already displaying high to extreme levels of energy! This of course does not preclude further acceleration, by the already studied means, to the even higher energies also observed ([1], p. 613).
VI. ABUNDANCE OF TRIGGERING 1.022+ MEV PHOTONS
J. Thermalization of energetic electrons and positrons
It must be very clear by now that for any threesome of electrons and positrons to mutually capture as a metastable system before accelerating to form a nucleon, they need to be thermal to start with, or else they would simply scatter off each other, or recombine as pairs to reconvert to a few photons if any electron links up with just one positron (the well known positronium decay process). So what is needed for very low relative energy thermal electrons and positrons to come about in the corona is a process that would cause these high velocity electrons and positrons to slow down sufficiently.
We know from observation that such slowing down of electrons in the corona is quite frequent. In fact, the low energy photons detected due to free-free emission are possibly the most important observation tool in studying the corona ([1], p. 42). So this is an interesting possible source of thermal electrons.
Free-free emission is the process by which an electron loses energy as a bremsstrahlung photon as it is deflected by a proton while having too much velocity to be captured, which is the usual type of encounter in the corona, given that all atoms present are highly ionized.
K. Creation of already thermal pairs
The best candidates, however, would be actual photons of energy 1.022 MeV or slightly higher, since their decoupling would leave no excess energy for the created pair to move very far away from each other. Such a pair would then appear at a practical dead stop with respect to each other. If perchance either a thermal electron or positron happens to be near enough at this precise moment, a mixed threesome could immediately metastabilize and the inward adiabatic acceleration process would be triggered.
L. Verified creation of thermal pairs in the corona
The fact is that large amounts of photons in the 10 keV to 10 MeV range have been observed in the corona. Every time a large flare occurs on the Sun, such photons are emitted by the chromosphere, as particles accelerated to sufficient energy to interact with atomic nuclei fall back into the denser chromosphere, where a number of collisions produce gamma photons in the proper range, amounting to a continuous emission by a number of processes: electron bremsstrahlung, nuclear de-excitation, neutron capture, positronium annihilation or pion decay radiation ([1], p. 42).
Since, starting from the 1.022 MeV energy level threshold, photons are very prone to decoupling into electron-positron pairs, there is no doubt that a large number of them grazing highly ionized atomic nuclei in the corona will actually decouple in a thermal state, ready for recombination with a relatively thermal electron or positron that happens to be in the immediate vicinity.
So observation reveals that the well known pair production process is guaranteed to occur in the corona, given the presence in the appropriate thermal state of the particles required to initiate a triad production process, thus providing an increase in ambient energy level of more than 200-fold, just as observed, and systematic production of ultra high velocity electrons and positrons, just as is also observed in the corona.
VII. NUCLEISYNTHESIS IN THE CORONA
M. Continuous nucleon genesis by low level chain reaction
The question now is: once initiated from pair production due to free-moving 1.022 MeV photons decoupling during the first large flare after a star ignites, could such a nucleon genesis process be self-maintaining in the corona? Some sort of low level non-explosive chain reaction!
From observation of the continuous existence of coronas about the Sun and other stars, it would seem so, if such nucleon genesis really is the explanation of the extreme temperature observed in coronas. The actual mechanics of self-maintenance, which would involve the numerous second generation high energy gamma photons being emitted during the decay process of the innumerable pions produced, remains to be clarified.
We could certainly speak of nucleisynthesis already with the creation of protons from accelerating
threesomes made up of thermal electrons and positrons, since protons are hydrogen nuclei. But what about the
more massive nuclei of the periodic table that can also be found in such overabundance in the corona?
N. Protons and neutrons produced in statistically equal numbers
Let us note here that statistically speaking the chances for a neutron to be created by an initial
threesome acceleration process are exactly equal to those of a proton, which means that statistically equal
numbers of neutrons and protons are likely to be produced if the process is repeated. What is more, all of them
will be thermal by definition, practically appearing at a dead relative stop at the location of creation since the
three particles that accelerate transversally to make them up have to be thermal to start with.
O. Production of all elements favored by the presence of crowds of free thermal nucleons
Since crowds of thermal protons and neutrons are likely to rather often come close together in the coronal plasma, nucleisynthesis of lighter atoms such as helium, lithium and other light elements would not really be surprising, given the presence of so many of the required free thermal neutrons and protons. These lighter nuclei, being partially if not completely ionized, definitely stand a chance of converting to higher atomic number nuclei, here again due to the presence of so many free-moving neutrons, protons and other light ionized nuclei being available in the coronal plasma.
Could such nucleisynthesis be the cause of the 3-fold overabundance of these light elements noted in the corona and solar wind with respect to the Sun's photosphere? The probability is of course very high, for a process that must have been going on ever since the Sun ignited, since we can assume that the corona has existed from that moment on, a process which must have produced countless trillions of tons of new atoms covering the complete spectrum of the periodic table of elements!
P. Experimental proof of continuous production of elements in the corona by absorption of neutrons
Now, are there identifiable signs that neutron-absorption-driven nucleus building does occur in the
corona? The answer is yes. Given that the gamma photon emitted when a neutron is captured by a proton has a
very narrow characteristic energy of 2.223 MeV, it is quite easy to identify in radiation spectra. This very
narrow gamma-ray line is indeed often quite prominent in high-energy spectra of the corona ([1], p. 629 and p.
34, Figure 1.25) meaning that considerable capture events by protons frequently occur and that neutrons are
plentiful in the corona.
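The 2.223 MeV figure can be cross-checked against nuclear rest masses: the line energy is simply the deuteron binding energy released in n + p → d + γ. A minimal sketch, with mass values taken from standard tables rather than from the article:

```python
# Cross-check of the 2.223 MeV neutron-capture line discussed above.
# When a proton captures a thermal neutron (n + p -> d + gamma), the emitted
# photon carries away the deuteron binding energy.
# Rest masses in MeV/c^2 from standard tables (assumed, not from the article):
M_PROTON = 938.272      # proton rest mass
M_NEUTRON = 939.565     # neutron rest mass
M_DEUTERON = 1875.613   # deuteron rest mass

def capture_gamma_energy_mev():
    """Energy released when a proton captures a thermal neutron (MeV)."""
    return M_PROTON + M_NEUTRON - M_DEUTERON

print(f"n + p -> d + gamma line: {capture_gamma_energy_mev():.3f} MeV")
```

The mass deficit of the deuteron reproduces the quoted line energy to within a keV.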
Neutron capture by heavier nuclei involves a wide range of capture-gamma photon energies and is less
easily identifiable in the coronal spectra (with capture by helium-3 producing practically no gamma emission),
but since so many neutrons are obviously available for capture by protons, there is no doubt that they are just
as easily available for capture by larger nuclei.
Current models assume that the whole population of free neutrons observed has to be produced by
destructive scattering of highly accelerated ions of carbon, nitrogen, oxygen, iron, etc., which knocks highly
energetic free neutrons off these nuclei; these neutrons must then first be slowed down before they can be
captured by protons and higher-mass nuclei ([1], p. 630). But if they were produced by nucleogenesis as
analyzed here, they would be plentiful in the corona and already thermal at the very moment of production,
immediately available for capture.
VIII. THE BIRTH OF PLANETARY SYSTEMS
Q. The Solar winds
We mentioned earlier that solar winds are constantly expelling hundreds of millions of tons of ionized
material from the outer edges of the corona, which migrates into the whole solar system. The material thus
expelled can travel even well beyond Pluto, as far as the heliopause, at about 100 times the distance from the
Earth to the Sun, which is the frontier at which the pressure of the solar winds falls into equilibrium with
the pressure of particles coming from interstellar space. So let us now clarify the nature of these "solar winds".
Solar winds are still being analyzed, as their mechanics is not yet fully understood, but they are known
to be driven by the magnetic field of the Sun. The stronger component, the fast wind, is driven by the open
magnetic fluxes originating from both poles of the Sun, while a weaker component (the slow wind) operates
mainly from the equatorial region, where magnetic fluxes are observed to be mostly closed.
The fluxes issuing from the poles are termed "open" because they are not observed to fold back
towards the Sun, as any magnetic flux must. It seems obvious, however, that the magnetic "lines" issuing from
the north pole of the Sun must eventually loop back to re-enter the Sun's south pole; otherwise there would be
a contradiction with Maxwell's equations. They most certainly loop back, possibly as far out as the heliopause,
without our being able to verify this directly for the moment on account of the distances involved.
R. Expulsion of 6.7 billion tons of material per hour
It has been calculated that solar winds expel a steady flow in all directions from the outskirts of the
corona of about 6.7 billion tons of material per hour ([1], p. 703), at typical initial velocities varying from 1.44
to 2.88 million kilometers per hour ([1], p. 167), which means that it takes only about 150 million years for the
equivalent of the total mass of the Earth to be expelled!
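As a sanity check on this timescale, a back-of-envelope sketch (the Earth mass and the metric-ton conversion below are standard values assumed here, not taken from the article) lands at roughly 100 million years, the same order of magnitude as the ~150-million-year figure quoted, the difference presumably reflecting the cited source's more detailed assumptions:

```python
# Order-of-magnitude check of the solar-wind mass-loss timescale quoted above.
# Assumed constants (standard values, not taken from the article):
EARTH_MASS_KG = 5.972e24                 # mass of the Earth
SOLAR_WIND_KG_PER_HOUR = 6.7e9 * 1000.0  # 6.7 billion metric tons/hour
HOURS_PER_YEAR = 24 * 365.25

def years_to_expel_one_earth_mass():
    """Years for the solar wind to carry away one Earth mass at this rate."""
    kg_per_year = SOLAR_WIND_KG_PER_HOUR * HOURS_PER_YEAR
    return EARTH_MASS_KG / kg_per_year

print(f"~{years_to_expel_one_earth_mass() / 1e6:.0f} million years")
```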
Textbooks and other references lead one to think that it is not yet clearly understood why ionized
particles being carried outward by the solar winds acquire such high ejection velocities by the time they reach
a distance of about 5 solar radii from the Sun's photosphere.
But since all particles in the masses of material being carried away in the Sun's strong magnetic field
are ionized, thus charged, and moving in the same direction along practically parallel straight lines away
from the Sun, the Lorentz law

F = q(E + v × B)    (18)

mandates that a macroscopic electric field obeying the relation v = E/B has to come into being to
account for all these particles moving in a straight line away from the Sun. There simply exists no possibility for
charged particles to move in a straight line at any velocity whatsoever if this E/B equilibrium is not locally
established, involving equal local densities for the electric field (E) and the magnetic field (B).
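The equilibrium described above can be illustrated numerically: with E, v and B mutually perpendicular and |v| = |E|/|B|, the Lorentz force of equation (18) vanishes and the charge coasts in a straight line. The field magnitudes below are arbitrary illustrative values, not solar data:

```python
# Numerical check of the v = E/B equilibrium: with E, v and B mutually
# perpendicular and |v| = |E|/|B|, the Lorentz force F = q(E + v x B) is zero.

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def lorentz_force(q, E, v, B):
    """Lorentz force F = q(E + v x B) on a charge q."""
    vxB = cross(v, B)
    return tuple(q * (e + c) for e, c in zip(E, vxB))

q = 1.0
B = (0.0, 0.0, 2.0)   # magnetic field along z (arbitrary magnitude)
E = (0.0, 6.0, 0.0)   # electric field along y, perpendicular to B
v = (3.0, 0.0, 0.0)   # drift velocity along x, |v| = |E|/|B| = 3

print(lorentz_force(q, E, v, B))   # zero net force: straight-line motion
```

Any other velocity magnitude or direction leaves a residual force that curves the trajectory, which is the sense in which the E/B balance is the only configuration allowing straight-line motion.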
What most probably happens is that once some massive amount of ionized (thus charged) particles is
forced by whatever circumstances to start moving in the same direction away from the Sun in a directed
magnetic field, their individual electric charges can only add up to constitute a local macroscopic electric
field at the same scale as the ambient magnetic field. See in this regard the analysis of the Einstein-de Haas and
Barnett effects carried out in a separate paper ([10]).
A similar idea was already proposed by Kaoru Takakura in 1988 ([1], p. 499, sub-ref: Solar Physics
Journal, No. 115, p. 149), involving stochastic electromagnetic wave-particle interaction processes, with the
acceleration occurring at the particle scale and averaging out at the macroscopic scale.
Such an electric field has no choice but to establish itself normal to the ambient magnetic field and
to the direction of motion of the mass of particles, since, in accordance with Maxwell and Lorentz, a
charged particle moving in a straight line can do so only perpendicularly to a plane defined by a magnetic field
itself perpendicular to an electric field, both fields having equal densities whose intensities determine the
velocity of the particle and being themselves perpendicular to its direction of motion. There simply
exists no other possibility.
S. Coronal Mass Ejections (CME)
The velocity of the massive amounts of particles involved will have no choice but to adjust to the
intensity of the macroscopic E/B equilibrium being established to sustain their straight line motion outwards.
Such an adjustment, thus an acceleration, would no doubt be progressive over a measurable period of time
as the global electric field grows towards this global E/B normal alignment; this is precisely what seems to
have been measured regarding CMEs, which we will now discuss ([1], p. 721).
T. CMEs expel each day up to 125 times more material than solar winds
Besides the steady outflow of material due to solar winds, cataclysmic events named Coronal Mass
Ejections (CMEs) typically occur a few times each day, sending out from 100 billion to 10 trillion tons of
material each time at velocities ranging from 360,000 km/h to 7.2 million km/h ([1], p. 703). This means that on
average, CMEs expel each day from 2 to 125 times more material than the steady daily solar wind outflow!
There is proven evidence that all CME processes are initiated at the outskirts of the corona, just like
solar wind ejections, that is, nowhere near the transition region with the Sun's chromosphere ([1], p. 731),
which confirms that they are not driven by the more energetic activity of the lower corona.
If we set for CMEs a conservative average of 30 times more material ejected than the steady solar wind
outflow, then combining both ejection processes, only about 5 million years are required for the equivalent of
the total mass of the Earth to be expelled outwards from the corona and spread out into the solar system! And
this process has presumably been going on ever since the Sun ignited an estimated 4.5 billion years ago!
U. The total mass of the whole planetary system ejected in less than 2.275 billion years
Considering that the combined mass of our planetary system, including the Kuiper belt material and
other inner heliospheric materials, amounts to about 455 Earth masses, it would have taken only about 2.275
billion years for an equivalent amount of mass to have been ejected from the corona into the heliosphere!
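Both timescales follow directly from the figures already quoted; a minimal sketch, assuming the ~150-million-year solar-wind timescale and the conservative CME factor of 30 from the previous subsection:

```python
# Arithmetic behind the ~5-million-year and ~2.275-billion-year figures above.
WIND_YEARS_PER_EARTH_MASS = 150e6    # solar wind alone (from the article)
CME_FACTOR = 30                      # CME outflow / wind outflow (conservative)
PLANETARY_SYSTEM_EARTH_MASSES = 455  # total planetary-system mass

# Combined wind + CME outflow is (1 + 30) times the wind rate.
combined_years_per_earth_mass = WIND_YEARS_PER_EARTH_MASS / (1 + CME_FACTOR)
years_for_whole_system = (combined_years_per_earth_mass
                          * PLANETARY_SYSTEM_EARTH_MASSES)

print(f"{combined_years_per_earth_mass / 1e6:.1f} Myr per Earth mass")
print(f"{years_for_whole_system / 1e9:.2f} Gyr for 455 Earth masses")
```

Computed exactly, 150/31 gives about 4.8 million years per Earth mass and about 2.2 billion years for the whole system; the article's 2.275-billion-year figure corresponds to the rounded 5-million-year value multiplied by 455.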
Since the coronas of younger stars have been observed to be much more energetic than ours, it seems
probable that this was also the case for our own corona early on. The time for this amount of mass to have
been ejected from the corona when the Sun was still young may thus have been far shorter, maybe less than 500
million years. These ballpark figures are of course approximate, but most probably of the right order of
magnitude.
Now what are we to make of these figures?
V. All of the matter in the planetary system could have been generated in the Corona
It has been theorized up to now that the heavier elements present in the planetary system must have
been formed as supernovae exploded elsewhere in the universe and must somehow have migrated here some 4
billion years ago to become available to eventually make up the planets of our system.
There is little doubt that supernovae do eject countless billions of tons of all elements as they explode,
but we just saw that if general nucleon genesis really occurs in the corona (rather convincing telltales for which
are the million-plus kelvin temperatures and the confirmed overabundance of the metals detected in the corona
with respect to the Sun's chromosphere), there exists a real possibility that all elements in the solar system
could be indigenous, except for the initial hydrogen cloud that eventually condensed into an accumulation that
reached critical stellar ignition mass, causing our Sun to become a star.
If such is the case, it is more than likely that, due to their higher mass, the heavy ions formed in the
corona would have tended to be expelled to lesser distances from the Sun than the lighter elements by slow
CMEs and by the slow solar wind that dominates on the plane of the ecliptic.
This may very simply explain why the inner planets (Mercury, Venus, Earth, Mars and the asteroid belt)
are far denser than the outer gas giants, which for their part may generally have been the product of high-velocity
CMEs and of the fast solar wind, in which heavier ions would likewise have been sent to lesser distances
than lighter elements due to their larger masses. This would also explain why the densest planet
is closest to the Sun, with the others generally becoming less massive with distance.
Such a possibility would greatly simplify the understanding of planetary system creation in the universe,
since there would no longer be a need to invoke the rather far-fetched hypothesis that all heavy elements in
the universe were created only in the comparatively rare supernova explosions.
W. Every star can develop a planetary system
Finally, such nucleosynthesis of all elements in our corona, combined with the confirmed presence of
coronas accompanying all observed stars, would support the hypothesis that all stars must eventually
develop a planetary system.
Also, it is estimated that the Sun came into its active star state about 4.6 billion years ago, although such a
figure really is guesswork and could be highly underestimated. What we are certain of is that the Sun is older
than the Earth, whose oldest rocks identified to date are estimated to be over 4 billion years old. But working with this
4.6-billion-year ballpark figure would mean that from that moment on until today, the equivalent of 2 to 10
times the mass of the whole planetary system has been ejected from the corona.
Even considering the very conservative factor-of-2 estimate, the question comes to mind as to what
happened to the extra material ejected, which is the equivalent of the current mass of our planetary system,
that is, 455 Earth masses.
There logically can be only one answer to this question. Just as there exist intense exchanges of
material at the chromosphere-corona boundary, there must exist similar exchanges at the heliosphere-galactic
boundary, possibly sending that material as far as the Oort cloud and possibly even contributing material to
that heliocentric spherical accumulation; otherwise all of that material would still be found within the
heliosphere.
This also means that all stars must have come into being (and must still be coming into being) with no
planetary system, after condensing from some part of the primordial hydrogen plenum. That plenum itself may
have begun with a few primordial high-energy photons that, countless eons in the past, mutually destabilized
into the first pairs, which then produced the first hydrogen nuclei as they liberated the three mesons that
produced the second-generation photons and the charged electrons and positrons that kept this irreversible
process going.
Planetary systems then progressively came into being as new material was produced in coronas, and they
increased in mass over time while extra material was also being ejected into the galactic surroundings. This
then means that the masses of planetary systems and galaxies had to increase over time and that the universe
has become progressively more massive, a process that would still be ongoing.
IX. CONCLUSIONS
If nucleon genesis were confirmed as occurring in the corona, this would provide a direct answer to the
issue of the corona's extreme temperatures.
If nucleosynthesis in the corona of elements more massive than hydrogen were confirmed, this would
give substance to the hypothesis that all elements in the Solar system could be indigenous and that all stars
eventually develop a planetary system.
REFERENCES
[1] Markus Aschwanden. Physics of the Solar Corona, Springer, 2006.
[2] André Michaud. On the magnetostatic Inverse cube law and magnetic Monopoles. International
Journal of Engineering Research and Development e-ISSN: 2278-067X, p-ISSN: 2278-800X,
www.ijerd.com. Volume 7, Issue 5 (June 2013), PP.50-66. (http://www.ijerd.com/paper/vol7-
issue5/H0705050066.pdf).
[3] M. Haïssinsky. La chimie nucléaire et ses applications, France, Masson et Cie, Éditeurs, 1957.
[4] G. Goldhaber et al., Observation in e+ e- Annihilation of a Narrow State at 1865 MeV/c² Decaying
to Kπ and Kπππ, Phys. Rev. Lett., Vol. 37, No. 5, 255 (1976).
[5] Blackett, P.M.S & Occhialini, G. (1933) Some photographs of the tracks of penetrating radiation,
Proceedings of the Royal Society, 139, 699-724.
[6] G. Hanson, G.S. Abrams et al. (1975) Evidence for Jet Structure in Hadron Production by e+ e-
Annihilation. Phys. Rev. Lett., Vol. 35, No. 24, 1609-1612.
[7] André Michaud. The Mechanics of Electron-Positron Pair Creation in the 3-Spaces Model.
International Journal of Engineering Research and Development, e-ISSN: 2278-067X, p-ISSN: 2278-
800X, www.ijerd.com Volume 6, Issue 10 (April 2013), PP. 36-49. (http://ijerd.com/paper/vol6-
issue10/F06103649.pdf).
[8] André Michaud. The Mechanics of Neutron and Proton Creation in the 3-Spaces Model.
International Journal of Engineering Research and Development. e-ISSN: 2278-067X, p-ISSN : 2278-
800X, www.ijerd.com. Volume 7, Issue 9 (July 2013), PP.29-53. (http://ijerd.com/paper/vol7-
issue9/E0709029053.pdf)
[9] Walter Greiner & Joachim Reinhardt, Quantum Electrodynamics, Second Edition, Springer Verlag
New York Berlin Heidelberg, 1994.
[10] André Michaud. On the Einstein-de Haas and Barnett Effects. International Journal of Engineering
Research and Development. e-ISSN: 2278-067X, p-ISSN: 2278-800X, www.ijerd.com Volume 6,
Issue 12 (May 2013), PP. 07-11. (http://ijerd.com/paper/vol6-issue12/B06120711.pdf).