The document proposes new concepts and products for modeling telecommunications networks and billing systems using a quantum definition of information. It puts forth the idea of "infotons", proposed elementary particles of information that could be analogous to Higgs bosons. A process model is presented for a telecom provisioning system that aims to optimize workflows using this information definition. Three proposed products emerge from it: an intelligent capacity agent, an information-based billing system, and an "infoton switch." The document speculates on relationships between information, events, and time, and on whether infotons and Higgs bosons could be two manifestations of the same phenomenon.
1. On the definition, design and implementation of an Integrated, Global, Intelligent Capacity Agent for telecommunications and unified networks, based on quantification and qualification of information by an elementary, transaction-based model (convergence and unification of Economics and Physics through an elementary definition of information: applications to a process definition)
By
Abdul-Basit Khan
October 22nd, 2002
Additions and revisions, February 13th, 2005
Unification of Economics with Quantum Physics
Telecommunications and Information Technology convergence
Speculated relationship of Infoton* with Higgs Boson
Info-phone, information-based-billing system, infotonic switch
Impacts on telecommunications and IT industries
A new dimension in Information Economics or “Econo-Physics”
2. Abstract:
This treatise introduces a number of ideas that are both evolutionary and revolutionary.
The research begins with a process model for the customer operations and provisioning group of an Incumbent Local Exchange Carrier. The system is modeled in terms of information flows, based on the quantum definition of information presented later. In this process, the correlation between time (and incremental changes in time) and information (in terms of incremental changes in information) is described. In system terms, the effects of quantum values of information on entropy (disorder in the system), and of changes in entropy with time as information flows, are modeled.
This model is the foundation of the product space for the new products introduced in the second part of this paper.
It can be seen that this entire process of provisioning and information flows can be optimized if a quantum definition of information is adopted: one that defines the quality and quantity of information and defines information as an elementary force and field, possibly equivalent to the hypothetical Higgs field, where the Higgs bosons are, in fact, the particle proposed in this paper: the Infoton. A detailed description of Higgs particles and the Higgs force is presented in Appendix 2. A strong correlation to the Infoton is described by the following link:
http://www.coimbra.lip.pt/atlas/higgsmec.htm
Process optimization, re-definition of information at a quantum level, and the relationship described between entropy, time and information all lead to three new products for converging communication networks: (i) an intelligent unified capacity agent, an artificial-intelligence-based expert system (consisting of several knowledge modules, specified below); (ii) an information-based/content-based billing system; (iii) an Infoton switch based on a principle similar to the Heisenberg uncertainty principle, and on quantum symmetry and pairing of elementary particles as applied to information retention and loss.
These products lead to another theoretical arena and a hypothetical proposition (Appendix 1): the particle nature of information and the wave nature of time. This proposition, taking into account information symmetry, completeness and correlation with time, directly provides a new evolutionary perspective for Economics, impacting price theory, game theory, arbitrage and negotiation economics (Nash/Cournot equilibria), bargaining under uncertainty, and dynamic games.
3. PROCESS MODEL
This process model describes a basic application of the information definition delineated later. It models the provisioning process in a CLEC/ILEC environment and shows how capacity constraints come into play during provisioning. An intelligent system is defined and depicted to meet such capacity constraints in network and services planning at a CLEC or an ILEC such as AT&T, Verizon or Sprint. Later, a simple realization of the Intelligent Capacity Agent based on the currently available Information Warehouse (IW) is described.
The information definition described below can be used for designing Next Generation
wireless networks, optical systems and quantum databases. Mathematical proofs are currently
being sought for such an elementary definition of information. This definition of
information can be utilized in designing organizational decision support systems and
network management and control systems.
MODEL & PARAMETER DEFINITION:
The various parameters used in the models described below are defined as follows, to give the reader a better understanding of the optimization criteria and solutions being sought. The relationships between time, information, entropy and quantum uncertainty in event perception are applied in this model. Time and information are quantified and qualified as physically realizable entities. It is assumed that the reader has a general idea of the telco provisioning and order entry processes, for which this time/information relationship is demonstrated from a Systems perspective.
Order (telco customer orders), each stage characterized by an information increment and a time increment:
1. delta Inf4, delta T4: (I4, T4)
2. delta Inf1, delta T1: (I1, T1)
3. delta Inf2, delta T2: (I2, T2) (critical: I2, T2)
4. delta Inf3, delta T3: (I3, T3)
5. delta Inf5, delta T5: (I5, T5)
delta Inf = I: critical information, qualified and quantified.
TD = delivery of service to the customer (total time of delivery, from sales initiation to service delivery)
TD = F(T1, T2, T3, T4, T5) = F(ALFA)
TD = F(I1, I2, I3, I4, I5) = F(BETA)
QI = F(I1, I2, I3, I4, I5) = F(GAMMA)
QI = quality of information; the higher the delta I, the higher the quality of information.
Process optimization boils down to: minimize TD = min F(ALFA) and maximize F(GAMMA).
4. T1 is proportional to I1, which is proportional to P(ENGINEERING WORKS REQUEST), where P(ENGINEERING WORKS REQUEST) = the time for processing an Engineering Works Request (a request for equipment build/augmentation/removal).
F(GAMMA) is subjective to the personnel in each division, the information processing, and the hierarchy.
Oc(T2) = orders completed per designer, which is proportional to F(T1, T2, T3, T4, T5) + F(I1, I2, I3, I4, I5).
TD is proportional to delta T3 + delta T2 + n x delta T1 + m x delta T5 + p x delta T4, where n, m and p are the numbers of times information is exchanged.
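To make the relationships above concrete, here is a minimal Python sketch that computes TD and an information-quality score F(GAMMA) from per-stage increments. The stage labels follow the paper's T1..T5; the plain sum used for F(GAMMA), the stage-to-function mapping in the comments, and the example numbers are assumptions for illustration, since F(ALFA), F(BETA) and F(GAMMA) are left unspecified.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    """One provisioning stage: its information increment and time increment."""
    delta_i: float   # incremental information gained in this stage (delta Inf)
    delta_t: float   # incremental time spent in this stage (delta T)

def total_delivery_time(stages: dict[str, Stage], n: int, m: int, p: int) -> float:
    """TD ~ delta T3 + delta T2 + n*delta T1 + m*delta T5 + p*delta T4,
    where n, m, p count how many times information is exchanged."""
    return (stages["T3"].delta_t + stages["T2"].delta_t
            + n * stages["T1"].delta_t
            + m * stages["T5"].delta_t
            + p * stages["T4"].delta_t)

def information_quality(stages: dict[str, Stage]) -> float:
    """F(GAMMA): higher aggregate delta I means higher quality of information.
    A plain sum is an assumed placeholder for the unspecified F."""
    return sum(s.delta_i for s in stages.values())

# Example order with five stages, keyed by the paper's T1..T5 labels
# (the functional assignments in the comments are illustrative only).
order = {
    "T1": Stage(delta_i=0.8, delta_t=3.0),  # engineering works request
    "T2": Stage(delta_i=0.9, delta_t=5.0),  # design (critical I2, T2)
    "T3": Stage(delta_i=0.5, delta_t=2.0),
    "T4": Stage(delta_i=0.7, delta_t=1.5),  # sales / order information
    "T5": Stage(delta_i=0.6, delta_t=2.5),  # post-design handling
}

td = total_delivery_time(order, n=2, m=1, p=1)
qi = information_quality(order)
print(f"TD = {td:.1f}, F(GAMMA) = {qi:.2f}")  # optimize: minimize TD, maximize F(GAMMA)
```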
EVALUATION OF PARAMETERS AND CORRELATIONS
TRACKING AVERAGE PROCESSING TIME FOR ENGINEERING WORKS REQUESTS
TRACKING FEEDBACK TIME FROM TECHS/INSTALLERS
SALES: REDESIGN REQUESTS, VARIATIONS ON THE ORIGINAL DESIGN SPECIFICATIONS
Tracking time delay *, **
[Figure: provisioning process flow between Sales (delta Inf4, delta T4), Design (delta Inf1,
delta T1), Network Engineering (delta Inf2, delta T2), Order Managers (delta Inf3, delta T3
and delta Inf5, delta T5) and Capacity. The stages marked * and ** occur after completion of
design. The figure also notes that delta T1' (Reactive Capacity Growth) > delta T1''
(Futuristic Capacity Growth).]
Dynamic update of the information content ("Delta I") of the two systems depicted in the
diagrams above is proposed.
If TD is irrelevant to design, then for a first design (not a re-design):
I2,T2 = f((I4,T4), (I1,T1), (I5,T5))
I5 can come before I2 (if pre-design information is sought).
Information and tools (IBIS, FARS/NEAD, INM) are used at the different stages.
Intelligent Capacity Agent (Ca) -- dynamically updated.
Intelligent Order Agent (Oa) -- dynamically updated.
Ca availability to the designer minimizes T1 and maximizes I1.
Oa availability to the designer minimizes T4 and maximizes I4.
Ca draws on Engineering Work Requests, updated capacity information from the actual
physical status of equipment, and predictive capacity growth depending on traffic forecast
and node (equipment) utilization.
Oa provides intelligent update of order/requirement information.
Ca: INTELLIGENT CAPACITY AGENT (EXPERT SYSTEM)
An expert system/DP system with dynamic information inputs.
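A minimal sketch of how Ca and Oa could be held as dynamically updated state, using only the inputs listed above (Engineering Work Requests, physical equipment status, traffic forecast, node utilization, order requirements); the class and field names are assumptions made for illustration, not part of the original design.

from dataclasses import dataclass, field

@dataclass
class CapacityAgent:                  # Ca: dynamically updated capacity view
    equipment_status: dict = field(default_factory=dict)   # node -> free ports (live physical status)
    traffic_forecast: dict = field(default_factory=dict)   # node -> forecast additional load (ports)
    work_requests: list = field(default_factory=list)      # pending Engineering Work Requests

    def update_from_network(self, node, free_ports, forecast):
        self.equipment_status[node] = free_ports
        self.traffic_forecast[node] = forecast

    def predicted_shortfall(self, node, ports_needed):
        """Predictive capacity growth: demand plus forecast load versus free ports."""
        return max(0, ports_needed + self.traffic_forecast.get(node, 0)
                      - self.equipment_status.get(node, 0))

@dataclass
class OrderAgent:                     # Oa: dynamically updated order/requirement view
    orders: dict = field(default_factory=dict)

    def update_order(self, order_id, requirements):
        self.orders[order_id] = requirements

Making Ca and Oa available to the designer in this form is what the text refers to as minimizing T1/T4 while maximizing I1/I4: the relevant information is already gathered when the design step starts.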
Design specifications of an Intelligent Capacity Agent are described below, after a
brief description of the two possible products: “Infophone” and “Information based
billing system”, realizable in this product space.
The Info-Phone
Abstract:
This is a quantum device based on the generation, symmetry, detection and inherent
particle nature of information, which can be attributed to elementary "Infotons", in fields
parallel to the Higgs field (possibly the Higgs field itself). Spontaneous generation,
symmetry and pairing of these particles, through ongoing elementary transactions, define
the generation, transmission and detection methodology for these elementary particles
(possibly Higgs particles).
The general idea behind the proposed development of an info-phone, a next generation personal
communications device, is described below in bullets:
Based on the "infoton" transfer mechanism.
Development of Information Sensors that detect "Infotons" and attribute values/energies to
them.
Development of a new device: Infophone
Info-Phone would utilize a separate billing system based on "infotons" transferred.
"Infoton" transfer mechanism would not require the normal telecom transmission media.
Info-Phone would be based on elementary "infotons", which would be transferred by
selective "Information Windows".
Protocol independence, going beyond ATM, MPLS and cell switching.
The information based matrix switch utilizes the Infoton generation/transmission/detection
mechanism. The switch could be housed on a sub-atomic quantum chip (quantum processor)
within the Info-Phone, where elementary particle generation and pairing (in probability
windows of time) would activate the particular parts of the matrix.
[Figure: switching matrix of the Infoton/Higgs boson generators/transmitters/detectors in
the Info-Phone, with paired point-masses satisfying Id = Ic and Ib = Ia. Even symmetry (an
even number of point-masses/generators/detectors on either side of the partition) means
events are symmetrical on each corresponding point-mass of the sub-atomic switch fabric.
An infoton generated/detected has status Lost or Retained. Annotations: Basit's Uncertainty
Principle -- either the time of occurrence of an event or complete and symmetrical
information about an event can be known; information is either lost or retained (the infoton
exists or decays) -- physics; generation of information produces competition and affects the
price level -- economics.]
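Purely as a toy illustration of the symmetry described in the figure (none of this is specified in the text), paired point-masses must register an event identically, and each infoton ends up either retained or lost:

import random

PAIRS = [("a", "b"), ("c", "d")]      # assumed point-mass pairs: Ib = Ia and Id = Ic

def generate_event():
    status = random.choice(["retained", "lost"])       # infoton exists or decays
    detections = {}
    for left, right in PAIRS:
        value = 1 if status == "retained" else 0
        detections[left] = detections[right] = value   # enforce even symmetry across the partition
    return status, detections

status, detections = generate_event()
print(status, detections)             # e.g. retained {'a': 1, 'b': 1, 'c': 1, 'd': 1}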
ASSUMPTIONS OF THE INFORMATION MODEL:
- Information sub-particles, "Infotons" -- at a quantum level.
- One Infoton has a certain unit price associated with it.
- Delta I = number of "Infotons" transferred (N).
- The optimal billing system is able to evaluate Delta I.
- The price/bill to the subscriber is a function of the number of Infotons and the price
associated with each "Infoton".
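A minimal sketch of the billing assumption above: the bill is the number of infotons transferred (delta I = N) times the unit price per infoton. The unit price and the per-session counts are illustrative placeholders, not values from the text.

UNIT_PRICE_PER_INFOTON = 0.0001      # assumed currency units per infoton

def bill_for_session(infotons_transferred):
    """Price/bill to the subscriber = N x unit price per infoton."""
    return infotons_transferred * UNIT_PRICE_PER_INFOTON

sessions = {"voice call": 120_000, "data transfer": 800_000, "video transfer": 2_500_000}
for kind, n in sessions.items():
    print(f"{kind}: {n} infotons -> bill {bill_for_session(n):.2f}")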
PROBLEMS ASSOCIATED WITH THE MODEL:
- Calculation of the "Infotons" transferred in a given transaction.
- The optimal billing system should be able to calculate the number of Infotons transferred.
- A pricing formula for the "Infotons" must be established.
- Voice, video and data can all be broken down into the number of Infotons transferred.
WORK TO BE DONE IN BUILDING THE THEORY:
- Quantification of information as elementary particles.
- Analyzing voice/video/data traffic in terms of the elementary information particles.
- Capability of the optimal billing system to evaluate the number of elementary particles
transferred in a given interaction, which could be a voice call, data transfer, video transfer
etc.
- Transaction based systems at the quantum level, where information transfer is accurately
quantified and qualified based on the content and attributes of information. A major
application of such a transaction based system is to the capacity problems faced by TELCOs.
One application is the predictive Ca: the Intelligent Capacity Agent (idea presented earlier).
It is a transaction based, data processing intelligent system where information inputs have
to be quantified and qualified for transactions between the different knowledge modules.
Various mathematical techniques are available for quantifying information, but not for
qualifying it. Information qualification is subjective at present; physical models to qualify
information are required, which can be incorporated into devices like the Info-Phone (above).
CAPACITY FORECAST: forecasting knowledge modules with a statistical inference engine
based on:
- Regional data: NPA-NXX.
- Types of businesses -- new set-ups, construction of new offices/buildings. This data for
each NPA-NXX will go into a knowledge module: the Market Data Module.
- Equipment utilization over time and circuit data -- new installations, cancellations --
used to forecast capacity growth/contraction.
- Equipment data from live systems -- circuit data. This will form two modules, the
Equipment Data Module and the Circuit Data Module, to be dynamically updated for the
rules based inference engine.
- Traffic growth over the network, over time, over an NPA-NXX, and the traffic forecast
for a future period of time. This predictive module would be the Traffic Analysis Data
Module.
USER INTERFACE OF THE INTELLIGENT CAPACITY AGENT:
Data input/update interfaces for the different modules, providing inputs to the
learning/inference engines.
CAPACITY ANALYSIS INTERFACE:
The user specifies the DH type and the end-points of the service requested; the output is
the low speed and high speed bandwidth components available on the systems, in terms of
ports, between the two end-points. Only the end-points and the DH type have to be
specified by the user.
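A minimal sketch of the capacity analysis interface just described: the user supplies only the DH type and the two end-points, and the agent returns the low speed and high speed bandwidth components (ports) available between them. The port inventory and end-point names are stub assumptions.

PORT_INVENTORY = {
    # (end_point_a, end_point_b) -> available bandwidth components, in ports
    ("NODE-A", "NODE-B"): {"low_speed_ports": 12, "high_speed_ports": 3},
}

def capacity_query(dh_type, end_a, end_b):
    key = (end_a, end_b) if (end_a, end_b) in PORT_INVENTORY else (end_b, end_a)
    available = PORT_INVENTORY.get(key, {"low_speed_ports": 0, "high_speed_ports": 0})
    return {"dh_type": dh_type, "end_points": (end_a, end_b), **available}

print(capacity_query("DS-1", "NODE-A", "NODE-B"))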
For the Intelligent Capacity Agent, code optimization is entropic, minimizing positive
changes in entropy.
An early practical realization of the Intelligent Capacity Agent can be through the
Information Warehouse.
The Information Warehouse (IW) can be used as a preliminary capacity analysis system
providing real-time availability status of the ports of the network components required for
delivering the service; the optimal path, based on rules incorporated as qualifiers, can also
be determined through the network data model. The various network management tools
which give the live status of ports in the Allstream network (Lucent ports, Fujitsu ports,
Nortel ports, Titan ports) can be interfaced in real time with the IW: data can be dumped to
the IW from these network management systems in real time, or real-time interfaces can be
developed. All the port data can be converted and updated into data objects/tables, where
the number of ports is a key in these data objects. The relationships between these data
objects, which model the point to point network, would be the type of service
(DS-1/DS-3/OC-3/OC-12/ATM). The qualifiers would be protection status, diversity and
other parameters defining the path through the network.
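A minimal sketch of the IW network data model described above: port-count data objects per network element, point-to-point relationships keyed by service type (DS-1/DS-3/OC-3/OC-12/ATM), and path qualifiers for protection status and diversity. Element names and figures are illustrative only.

from dataclasses import dataclass

@dataclass
class PortRecord:
    element: str          # e.g. a Lucent/Fujitsu/Nortel/Titan network element
    service_type: str     # DS-1, DS-3, OC-3, OC-12 or ATM
    free_ports: int       # number of ports is the key attribute of the data object

@dataclass
class Link:
    a: str
    b: str
    service_type: str
    protected: bool       # qualifier: protection status
    diverse: bool         # qualifier: diversity

def candidate_links(links, service_type, require_protection=False):
    """Filter the point-to-point model by service type and path qualifiers."""
    return [l for l in links
            if l.service_type == service_type and (l.protected or not require_protection)]

links = [Link("NODE-A", "NODE-B", "OC-3", protected=True, diverse=False),
         Link("NODE-B", "NODE-C", "OC-3", protected=False, diverse=True)]
print(candidate_links(links, "OC-3", require_protection=True))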
APPENDIX 1:
Speculated relationship between information, events and time:
It can be predicted that the particle nature of information is related to the wave
nature of time. When an event occurs in time, information is generated before, at the
time of, and after the event. An uncertainty principle comes into play at the time of
occurrence of the event: either the time of occurrence or symmetrical information
about the event can be known; both quantities cannot be known at the same time.
Information has two qualities: symmetrical (complete) information, and asymmetric
(incomplete) information. Information has two states: in the case of an event it is
either retained or released. At the time of occurrence of an event, symmetrical
information is generated, but transmission and reception techniques render it
asymmetric. Any event generates infotons*, which increase the entropy in the universe
around the event.
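One way this speculated trade-off could be written in the style of an uncertainty relation (the symbols and the constant below are not given in the text; this is only an assumed formalization):

\Delta t \cdot \Delta I_{\mathrm{sym}} \;\ge\; \kappa

where \Delta t is the uncertainty in the time of occurrence of the event, \Delta I_{\mathrm{sym}} is the departure of the received information from full symmetry/completeness, and \kappa is an undetermined positive constant.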
APPENDIX 2:
INFOTONS? HIGGS BOSONS?
INFORMATION? HIGGS FIELD?
ARE THE INFOTON AND THE HIGGS PARTICLE MANIFESTATIONS OF THE SAME
PHENOMENON, TWO FACES OF THE SAME COIN?
http://pdg.lbl.gov/atlas/etours_physics/etours_physics10.html
Does Economics lead to the same results as theoretical physics?
The information below is quoted from:
“http://www.openquestions.com/oq-ph008.htm”
“We say "fortunately", because Higgs theory makes certain predictions which are still not
verified experimentally -- the primary example of which is the existence of (at least) one
massive spin 0 boson (i. e. a "scalar" boson) that has not yet been observed, despite
intensive experimental searches -- the Higgs particle.”
“The Higgs mechanism
Let's review where we stand so far.
We have a nice, well-behaved (i. e., mathematically consistent, renormalizable)
Yang-Mills gauge theory of the electromagnetic force, based on U(1) gauge
symmetry.
We would like to have an equally nice Yang-Mills gauge theory of the weak
force, and it should be based on a SU(2) symmetry.
Experimentally, it is known that the particles which mediate the weak force are
massive, instead of massless as required in a Yang-Mills theory.
The electromagnetic and weak forces are intertwined, because the weak SU(2)
symmetry exchanges particles that have different amounts of electric charge.
Yet any potential symmetry between electromagnetic and weak forces can't be
exact, since the forces have different strengths.
A series of profound insights by Sheldon Glashow, Steven Weinberg, and Abdus Salam,
mostly as independent contributions, led to the unified theory of the electroweak force.
This was accomplished by taking the above givens, making a few inspired assumptions,
and synthesizing everything in a new -- and quite effective -- way.
The insights were as follows:
1. Most of the theoretical difficulties result from the existence of nonzero rest
masses of the various particles. The masses break the symmetry between electrons
and neutrinos (and other particle pairs), they are incompatible with a
straightforward Yang-Mills gauge theory, and they are the root of the problems
with renormalizability.
2. At very high energies, the energy contributed by a particle's rest mass becomes
insignificant compared to the total energy. So at sufficiently high energy,
assuming a particle rest mass of zero is a very good approximation.
3. A consistent, unified Yang-Mills theory of electromagnetism and the weak force
can be formulated for the very high energy situation where particle rest masses are
effectively zero.
4. At "low" energies (including almost all levels of energy which are actually
accessible to experiment), the symmetries of the high energy theory are broken,
and at the same time most particles acquire a nonzero rest mass. These two
"problems" appear simultaneously when symmetry is lost at low energy, much as
symmetry is lost when matter changes state from a gas to a liquid to a solid at low
temperature.
The "Higgs mechanism" is basically nothing more than a means of making all of this
mathematically precise.
The key ingredient not yet specified is to assume there is a new quantum field -- the
Higgs field -- and a corresponding quantum of the field -- the Higgs particle. (Actually,
there could be more than one field/particle combination, but for the purposes of
exposition, one will suffice.) The Higgs particle must have spin 0, so that its interaction
with other particles does not depend on direction. (If the Higgs particle had a non-zero
spin, its field would be a vector field which has a particular direction at each point. Since
the Higgs particle generates the mass of all other particles that couple to it, their mass
would depend on their orientation with respect to the Higgs field.) Hence the Higgs
particle is a boson, a "scalar" boson, since having spin 0 means that it behaves like a
scalar under Lorentz transformations.
The Higgs field must have a rather unusual (but not impossible) property. Namely, the
lowest energy state of the field does not occur when the field itself has a value of zero,
but when the field has some nonzero value. Think of the graph of energy vs. field
strength as having the shape of a "W". There is an energy peak when the strength is 0,
while the actual minimum energy (the y-coordinate) occurs at some nonzero point on the
x-axis. The value of the field at which the minimum occurs is said to be its "vacuum"
value, because the physical vacuum is defined as the state of lowest energy.
This trick wasn't created out of thin air just for particle theory. It was actually suggested
by similar circumstances in the theory of superconductivity. In that case, spinless
particles that form a "Bose condensate" also figure prominently.
The next step is to add the Higgs field to the equations describing the electromagnetic and
weak fields. At this point, all particles involved are assumed to have zero rest mass, so a
proper Yang-Mills theory can be developed for the symmetry group U(1)xSU(2) that
incorporates both the electromagnetic and weak symmetries. The equations are invariant
under the symmetry group, so all is well.
Right at this point, you redefine the Higgs field so that it does attain its vacuum value (i.
e., its minimum energy) when the (redefined) field is 0. This redefinition, at one fell
swoop, has the following results: the gauge symmetry is broken, the Higgs particle
acquires a nonzero mass, and most of the other particles covered by the theory do too.
And all this is precisely what is required for consistency with what is actually observed.
In fact, the tricky part is to ensure that the photon, the quantum of the electromagnetic
force, remains massless, since that is what is in fact observed. It turns out that this can be
arranged. In fact, the photon turns out to be a mixture of a weak force boson and a
massive electromagnetic boson that falls out of the theory. The exact proportion of these
two bosons that have to be mixed to yield a photon is given by a mysterious parameter
called the "electroweak mixing angle". It's mysterious, since the theory doesn't specify
what it needs to be, but it can be measured experimentally.
So, the Higgs mechanism is a clever mathematical trick applied to a theory which starts
by assuming all particles have zero rest mass. This is especially an issue for the bosons
which mediate the electroweak force, since a Yang-Mills theory wants such bosons to be
massless. While the photon is massless, the W and Z particles definitely aren't. Where,
then does their mass come from? Recall that we observed that spin-1 bosons have 3
"degrees of freedom" if they are massive, while only 2 otherwise. It turns out that this
extra degree of freedom comes from combining the massless boson with a massive spin-0
Higgs boson. That Higgs boson provides both the mass for the W and Z, as well as the
extra degree of freedom.
In fact, the mechanism furnishes mass to all particles which have a nonzero rest mass.
This occurs because all the fermions -- quarks as well as leptons -- feel the weak force
and are permuted by the SU(2) symmetry. And since quarks acquire mass this way, so too
do hadrons composed of quarks, such as protons and neutrons, which compose ordinary
matter as we know it.
But this mechanism is more than just a trick. If the whole theory is valid, then the Higgs
boson (or possibly more than one), must be a real, observable particle with a nonzero
mass of its own. This is why the search for the Higgs boson has become the top priority
in experimental particle physics.
What about renormalizability? Has this been achieved in spite of all the machinations? It
seemed plausible that the answer was "yes", which was of course the intention, since the
high-energy form of the theory has the proper gauge symmetry. But it took several years
until a proper proof could be supplied, in 1971, by Gerard 't Hooft. “
“Supersymmetry
It should be pretty clear by now that Higgs physics is very much tied in to the standard
model. Indeed, it's necessary in some form to make sense of many features of the
standard model -- such as electroweak symmetry breaking and particle masses. In fact, it
-- or something very like it -- seems to be necessary just to make the theory consistent.
And yet it's not quite a part of the standard model either. It has a bit of an ad hoc feel to it.
If, in fact, the Higgs mechanism exists in more or less the form outlined here, then the
standard model certainly has no explanation for why it's there, for what makes it happen.
We shall want more than that. We want to know the source of the Higgs physics itself.
There may be a number of ways to do that (which might be related among themselves).
But there is one body of theory which can provide exactly the explanation of Higgs
physics we're looking for, and which has been in gestation since the early 1970s (i. e.,
since the time the standard model assumed its present form). It's called supersymmetry.
We'll discuss it in much more detail elsewhere. All we need to say about it here can be
put very simply. The essential idea is to postulate one more symmetry, but of a radical
sort. This new symmetry relates bosons (particles with integral spin) to fermions
(particles with half-integral spin). The symmetry associates to each fermion and boson a
particle of the opposite type, known as its "superpartner". The equations of the theory are
set up so that they remain true when a symmetry operation exchanges any fermion or
boson with its superpartner. This is a radical step, because none of the postulated
superpartners can be identified with any known particle, so the theory immediately
doubles the number of particles which must exist. Even the Higgs boson has a
supersymmetric partner, the higgsino fermion.
One justification for taking such a radical step is this: When the mathematics of
supersymmetry is worked through, it turns out that the whole Higgs physics -- the Higgs
field, the Higgs boson(s), and the Higgs mechanism -- falls out as a necessary
consequence. This is great for Higgs physics, if in fact supersymmetry is a correct theory.
But the other side of the coin is that if the Higgs physics can't be verified experimentally,
then supersymmetry can't be correct. This is yet another reason why Higgs physics is of
such urgent concern to particle physicists.
The fact alone that the Higgs physics is a mathematical consequence of supersymmetry is
quite striking. It doesn't seem likely to be just a coincidence. Further, the discovery of any
supersymmetric particles would validate the theory of supersymmetry, and thereby
validate the Higgs physics also. On the other hand, the Higgs mechanism could still exist
even if supersymmetry doesn't exist in nature. But it would have serious problems, such
as the "hierarchy problem", and the lack of any obvious source or cause of the Higgs
field.
If supersymmetry is correct, then, so is the Higgs mechanism. And in fact, there are more
detailed predictions. Most notably, there will be not one Higgs boson, but several, each
with a different mass. All of the "extra" Higgs bosons could be quite a bit heavier than
the lightest one which is needed by the standard model. In particular, they might be so
heavy that they would not be detected soon, if at all. There are additional details
predictable by supersymmetry which further constrain the mass of the lightest Higgs
boson beyond what we might guess from the standard model alone.
If supersymmetric particles are detected before the Higgs boson, then that will be
confirmation of supersymmetry, so the Higgs particle must show up eventually as well.
But what about the converse? Suppose the Higgs boson is detected first. Will that be
evidence for supersymmetry? Yes, probably.
The reason lies in what we have alluded to, namely that the Higgs physics by itself leaves
something to be desired, as long as it is an ad hoc addition to the standard model. We
really want to have a good explanation for the physics itself. Supersymmetry provides
this. It automatically contains fields which behave as a Higgs field should, and hence
entails the existence of Higgs bosons. It also says something about how standard model
particles interact with these fields, which elucidates the mechanism.
A Higgs mechanism without supersymmetry would also introduce what is known as the
hierarchy problem. This problem arises if (as seems likely) the strong and electroweak
forces are unified just as the electromagnetic and weak forces are -- but at a much higher
energy scale -- around 10^16 GeV. The problem is to explain how this can be so much
higher than the electroweak unification scale of 100 GeV, or, alternatively, how the latter
scale, and the masses of the W and Z bosons, can be so small.
In short, if Higgs bosons are observed, we will have evidence for supersymmetry, as that
is the only theory we know of that makes good sense of Higgs physics.
More detail on supersymmetry
Where does the Higgs field come from?
OK. It's all well and good to say that mass comes from the Higgs field. But where does
that come from? What is it exactly? Why is it there?
The Higgs field, in some sense, answers the question of where mass comes from. But that
merely shifts the question of explaining mass to that of explaining the Higgs field.
This is still an open question, but there are some plausible answers, of different sorts.
There is, first of all, purely a mathematical and theoretical answer. It so happens that
there is a theorem, called Goldstone's theorem, after Jeffrey Goldstone, who came up
with it around 1960. The theorem says that when a continuous global symmetry is
spontaneously broken, there must exist a massless spin-0 boson. The particle is called
(generically) a Goldstone boson. Unfortunately, such a particle has never been detected.
Something's fishy.
Oddly enough, there is also this puzzle regarding a massless spin-1 boson which Yang-
Mills theory requires in order to carry a gauge force. Physicists were going crazy because
that could not be found either, for the weak force. They spent a lot of time trying to get
around the apparent requirement for both of these non-existent particles in the theory of
the weak force.
Eventually it was realized that there was a way to combine the two inadequate answers
mathematically in order to concoct an answer that worked. This is basically what
Weinberg and Salam did in coming up with the theory of the electroweak force. They
found that by adding yet another particle -- the Higgs -- they could make the Goldstone
boson disappear and make the electroweak bosons massive. The electroweak bosons are
said to "eat" the Goldstone boson and thereby put on weight. In the presence of the Higgs
field, the Goldstone boson, in effect, becomes the third polarization state of a gauge
boson. (Recall that massless spin-1 bosons have only two polarization states or degrees of
freedom.)
There is a second type of theoretical way to explain the Higgs mechanism. Recall that a
basic postulate about the Higgs field was that when the energy of the field is plotted
against the strength of the field, the resulting graph has a W shape. The simplest
mathematical curve with such a shape is a fourth degree polynomial of the form
E = x^4 + Bx^2, where E is energy and x is field strength. (E is plotted on the y-axis.) If B is
negative, then for values of x close to 0 (but not exactly 0), E will be negative. Hence for
such values, you actually get a lower energy with a non-zero field.
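Working out the minima of this curve directly (a short check, not part of the quoted text): setting the derivative of E = x^4 + Bx^2 to zero gives

\frac{dE}{dx} = 4x^{3} + 2Bx = 0 \;\Rightarrow\; x = 0 \ \text{or}\ x = \pm\sqrt{-B/2},
\qquad E\!\left(\pm\sqrt{-B/2}\right) = -\tfrac{B^{2}}{4} < 0 = E(0) \quad (B < 0),

so for negative B the true minima sit at nonzero field strength, which is the "W" shape described above.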
Now, in the standard model, all this just needs to be taken as a given. But it turns out that
in theories with supersymmetry, it is actually possible to compute how the coefficient B
in this equation behaves as a function of temperature. It is found that at high temperatures
(say, corresponding to an energy of 1000 GeV), B is positive. The polynomial expression
for E in that case has just a single minimum value (of 0) when the field strength is 0. On
the other hand, at lower temperatures (such as what we have in the universe at present), B
is negative. In that case, there are two minima of the polynomial for E, at nonzero value
of the field strength, which is just what we need.
This mathematical behavior reflects exactly what is required to have a nonzero Higgs
field appear "from nowhere" at relatively low temperatures. That is, the field doesn't exist
at high temperatures, because minimizing energy requires it to not exist. Yet at lower
temperatures it does exist, because in the changed circumstances, that is what yields a
minimum energy.
This puzzling behavior becomes much more plausible by analogy with a number of other
physical phenomena. All of these involve a change of state, a "phase transition", in matter
when the temperature of the system changes. Among the many examples are:
A magnetized piece of iron retains its magnetism up to a temperature of about
768° C but loses it above that point. Upon cooling below that point, the magnetic
field reappears.
A number of materials have the property of superconductivity at very low
temperatures, but lose this property at a few tens of degrees above absolute zero.
A crystal has a small number of distinct symmetry axes at low temperature, but
loses these axes, and becomes more symmetrical, when the temperature is high
enough to melt the crystal. Water, in the form of an ice crystal or snowflake, is a
perfect example.
What all these examples have in common is that a piece of matter exhibits a higher
amount of symmetry at higher temperatures. In addition, this phase transition occurs at a
definite point. Finally, the higher symmetry is lost if the matter is cooled below the
critical point. This phenomenon is so familiar we have various names for it (in different
contexts), such as "precipitation" (e. g. rain), condensation, crystallization, etc.
This is precisely what happens with the Higgs field. It is "really" there all along.
However, at high temperatures the equations governing the field are such that it does not
affect matter. As the temperature decreases, at some critical point the equations change
and the field condenses into a new state where it does affect matter. It suddenly causes
matter to have mass, because under the new equations the overall system has lower
energy when matter has mass than when it does not.
This new state at lower temperature also corresponds to the breaking of previous
symmetry -- which is exactly what the Higgs mechanism is supposed to do. In fact, the
mechanism was, originally, consciously invented to account for the breaking of symmetry
which explains the phenomenon of superconductivity, as we mentioned earlier.
Searching for Higgs bosons
Why has it been so difficult to find the Higgs particle experimentally? The answer is that
it must be fairly massive, so that very high energy particle accelerators are required for
the search.
Well, then, how massive is it? The answer is: the expected mass isn't very well
constrained by the theory, which makes the search even harder. It becomes necessary to
search systematically at every possible energy level, which becomes all the more tedious
since the searches must be done at the limits of current accelerator capability.
Fortunately, there are upper limits on the possible mass, given reasonable assumptions.
The standard model itself and existing experimental results imply that the upper limit on
a Higgs particle mass is about 8 times that of the Z boson. Since that is about 91.2 GeV,
the upper limit on the Higgs is around 700 GeV. Under some plausible further
assumptions, the limit can be lowered to around 3 times the mass of a Z, or about 270
GeV.
Experimental results already obtained place further limits on the expected mass of a
Higgs boson. The way this works is to assume some particular value for this mass and
derive various experimental consequences from that. Then consider experimental results
actually obtained. If you look at what the mass needs to be in order to agree with all the
results simultaneously, you find that the mass of the Higgs can't be more than about 2
times the mass of a Z, or about 180 GeV.
In the best case, if the simplest form of supersymmetry is correct, the limit must be even
lower, perhaps about 1.5 times the mass of the Z, or 135 GeV. Although there may be
more than one Higgs boson in a supersymmetric theory, this limit can be derived for the
lightest Higgs boson. (There aren't similar constraints on the heavier Higgs bosons.)
Even if a more complicated supersymmetric model is required to describe the real world
(because there are additional interactions and particles and forces), it appears the mass
limit on the lightest Higgs is still no more than 2 times the Z mass.
The very latest experimental results rule out any Higgs particles up to a mass of about
115 GeV, so there is actually rather little range left to search. Perhaps only to 135 GeV,
or 180 GeV at most.
We should expect some answers pretty soon.
What sort of evidence is being sought in order to detect Higgs bosons? Explaining this
gives a good illustration of how experimental particle physics works. To begin with,
theory says the Higgs particles must decay into particle-antiparticle fermion pairs. Any
supersymmetric particles, as well as the top quark (at about 155 GeV) would be too
heavy.
Further, since the Higgs generates the mass of other particles by its interaction with them,
theory says its probability of interaction is proportional to the mass. Thus the probability
of decaying into any particular (allowable) particle-antiparticle pair is in proportion to the
particle mass. The next three heaviest standard model fermions are the bottom (or b)
quark, the tau lepton, and the charm quark. All other fermions are much lighter. The
bottom quark is the heaviest, so most of the time a Higgs will decay into b and anti-b
pairs. Therefore, experiments seeking to detect the Higgs will look for events that
generate mostly b, tau, and charm pairs in the appropriate ratios.
There are only three accelerators in the world which could in principle detect a Higgs
boson. Two are at CERN in Geneva. The first of these is the Large Electron Positron
Collider (LEP), which has already been decommissioned to make room for the second,
the Large Hadron Collider (LHC), which won't be ready to work before 2005 (or later).
Just before the LEP was shut down late in 2000 there were hints that Higgs particles
might have been detected. Subsequent analysis of the data indicated that this was a false
alarm.
That leaves only the Tevatron at Fermilab in Illinois. A good deal of time at that facility
is now devoted to searching for the Higgs boson. If it is a real particle, it ought to be
detected very soon -- given that experiments are quickly reaching the upper limit of the
plausible mass range. By 2006 a large number of Higgs events should have been
observed (again supposing the particle exists). This will permit even low probability
decay modes to be studied and should produce enough information to discriminate among
possible theoretical alternatives.
Related issues
Higgs physics may seem like an esoteric issue. Except for fairly superficial references to
the search for Higgs particles and occasionally an allusion to the role that the Higgs field
plays in explaining the source of particle mass, the subject is rarely discussed in
publications intended for a general audience. While it's hard to disagree that the origin of
mass is an important issue, the concerns about the mechanism of symmetry breaking and
renormalizability might seem to be merely technical details only physicists worry about.
And yet it turns out that Higgs physics is involved in an astonishing -- almost an alarming
-- number of aspects of frontier questions of physics and (especially) cosmology. In
addition to the various topics touched on already, here are a goodly number of others.
Grand unified theories and the hierarchy problem
Following the successful unification of the electromagnetic and weak forces in the
electroweak theory around 1970, there was much enthusiasm to seek a similar unification
of the electroweak and strong forces in a similar sort of Yang-Mills theory, called a
"grand unified theory" (GUT). We discuss this in more detail elsewhere, but a central part
of any such effort is the introduction of additional Higgs fields to account for the
spontaneous breaking of the symmetry of this (hypothetical) unified theory.
Suffice it to say that, for a variety of reasons, the search for a GUT has not yet proven
successful. One of the problems is related to the vast difference in the energy levels that
would be involved. If there were such a unification of the electroweak and strong forces,
it would be manifest only at extremely high energies -- at least 10^15 GeV. In contrast, the
breaking of the electroweak symmetry occurs around 100 GeV.
This is a difference of a factor of at least 10^13. There would have to exist many new
bosons analogous to the photon, W, Z, and gluons. These bosons are collectively called X
bosons, and they would have masses of at least 10^15 GeV. The Higgs particles to account for
such massive bosons would need to be of a similar mass.
It is theoretically difficult to understand how there could be such a huge mass difference
between the lightest Higgs particle(s) which occur in the electroweak theory and these
other hypothetical particles. This is an aspect of what is known as the "hierarchy
problem". It is especially acute for Higgs particles, because they are scalar bosons, which
reflect relationships between different energy scales. In particular, the masses of such
bosons are related by equations whose parameters would require extreme "fine tuning" to
account for particles of such vastly different masses. This problem can be handled if the
theory of supersymmetry is correct.
Inflationary cosmology
As we noted above, systems of matter and energy tend to undergo what are called phase
transitions as the temperature of the system varies. At a very early time in the existence of
the universe (when it was about 10^-36 seconds old, to be more precise), it is suspected that
an extremely important phase transition took place. The temperature at that time
corresponded to an energy of about 10^15 GeV.
According to GUT models, somewhere around there is the critical point where the
electromagnetic, weak, and nuclear forces have the same strength. Above that energy
(and earlier in time), there was just one unified force. Below that energy, the electroweak
force and the strong force become distinct. It is hypothesized that several Higgs fields
exist which account for this symmetry breaking. (They are different from the Higgs field
that breaks the electroweak symmetry at a much lower energy.)
As the universe cooled through the critical temperature (about 10^28 K), at first nothing
happened. But the universe was not energetically stable. It was in a state resembling a
supersaturated solution or water cooled below the freezing point. This state has been
called the "false vacuum". Then a phase transition took place and -- in technical terms --
all hell broke loose. So much energy was released by the phase transition (just as occurs
when water freezes, but a lot more dramatically) that the universe quickly inflated in size
by a factor of 10^50. This is the event known as "cosmic inflation".
Of course, it's still just a hypothesis. Yet it accounts for a number of features which can
be observed in the universe today, which we discuss elsewhere. Indeed, the evidence for
the correctness of this inflationary cosmology is good, and getting better all the time. The
evidence for inflation, in fact, is much better than that for the Higgs mechanism. It seems
pretty clear that inflation really did occur. It's less clear what the exact mechanism was.
But the best guess is that various Higgs fields which account for the breaking of GUT
symmetry were involved. If so, this is indirect evidence for the Higgs mechanism.
Magnetic monopoles
There is another complication related to the use of a Higgs field in grand unified theories.
In some of those theories, such as the one based on SU(5) symmetry, if the Higgs field
does exist, magnetic monopoles should have been created during the first 10^-35 second
after the big bang -- during the phase transition responsible for cosmic inflation.
Magnetic monopoles would basically be constructed out of Higgs fields. Suppose there
are three such fields. At each point in space, each field is described by a single number,
since it's a scalar field. But with three fields, you need three numbers, so we have,
essentially, a three-component vector at each point. During the chaos of the phase
transition these vectors will tend to line up with each other at nearby points. But at a few
points, conditions may be so chaotic that no consistent direction can be established. A
magnetic monopole would develop at that point, with the magnetic field arising from the
interaction of the various Higgs fields.
A magnetic monopole is a type of 0-dimensional singularity. 1-dimensional and 2-
dimensional singularities could also develop under these conditions. Such singularities
are called "cosmic strings" and "domain walls", respectively. Objects of this sort are also
called, collectively, "topological defects". Just as when a liquid cools very rapidly to a
crystalline solid, different regions may crystallize in different orientations, resulting in a
discontinuous boundary between the regions. This boundary would become a domain
wall. The intersection of two walls would be a cosmic string. Such objects, if they exist,
would be exceedingly massive, and could have acted to "seed" the clumping of matter
when inflation ended.
Despite numerous experimental searches, magnetic monopoles have never been
conclusively observed. Cosmic strings and domain walls haven't either. However, this is
not necessarily a fatal problem, since inflation itself handily disposes of it. If inflation
occurred, all the monopoles that were created in the first instant would have been
dispersed so thoroughly in the subsequent inflation that they would be very sparsely
distributed in the present universe, and hence observation of them would be most
unlikely.
Gravity
If "empty" space is actually filled with Higgs fields, and hence with rather massive Higgs
particles, how is it that they apparently have no gravitational effect at all? Yes, there is
some sort of "dark matter" out there, apparently quite a bit of it. But physicists have ruled
out any contribution in the form of Higgs particles to this dark matter.
What's really going on here is concealed from us because we lack a viable quantum
theory of gravity. Indeed, it certainly makes sense that if Higgs particles really do explain
why particles of matter have mass, then there should be a very close connection with
gravity -- which is a theory all about the reciprocal effects of mass and space on each
other.
The cosmological constant, vacuum energy density
Although we do not yet possess a consistent quantum theory of gravity, some essential
properties of such a theory are known. If there is a quantum theory of gravity at all, it
must be mediated by a spin 2 boson, the graviton. The graviton must couple to anything
which has mass or (by the equivalence of mass and energy) anything which carries
energy, including the Higgs field.
Computations of this hypothetical coupling indicate that the cosmological constant --
which occurs in Einstein's fundamental equation of general relativity -- should have a
huge value far in excess of what is observed. In fact, the constant should be so large that
the entire universe would curl up to have a diameter less than a meter.
It's hard to see how this could be. Theoretical explanations are forced to assume that if
there were no Higgs field in the vacuum, then spacetime would have a huge negative
curvature precisely sized to cancel out almost exactly the positive curvature caused by the
Higgs field.
This does not feel like an aesthetically satisfying solution to the problem of the
cosmological constant. We must, presumably, wait for a satisfactory quantum theory of
gravity to really understand what goes on here.
Axions
A Higgs mechanism has been used to address a symmetry breaking problem quite
different from that of the electroweak theory. The symmetry involved is called CP, which
is a combination of two discrete symmetries: charge conjugation (C) and parity (P). There
are various interesting issues associated with these symmetries and a third -- time reversal
(T).
We discuss these issues elsewhere, but the basic situation is that there's a basic theorem
which states the combination of all three symmetries (CPT) is always preserved in nature.
That is, if you take any particle interaction and simultaneously apply all three symmetry
operations, the result will be another interaction that is exactly as likely to occur as the
original one. This is not necessarily the case if you take only two symmetries at a time,
however. CP symmetry, for instance, is often violated in weak interactions.
But with interactions involving the strong force, the probability of CP violation is
extremely small, possibly zero. There are two ways the strong force could violate CP
symmetry. (One is inherent in the equations of the theory, and the other follows from the
fact quarks have mass, which is a consequence of the electroweak force.) If the actual
violation is very small or zero, the two effects would cancel each other almost exactly,
which is curious. This situation is known as the "strong CP problem".
It turns out that the probability of CP violation in a strong force interaction can be
interpreted as the average value of a spinless quantum field, and the quantum of this field
is a particle called the "axion". The mathematics behind this result is basically the same
as that of the Higgs mechanism employed in the electroweak theory. It involves the
spontaneous breaking of a global symmetry called the Peccei-Quinn symmetry. The
Higgs field which causes this symmetry breaking may have been one that contributed to
the formation of domain walls.
Like Higgs particles, axions have not yet been observed. Unlike the Higgs particles,
however, they are expected to be extremely light -- less than 1/100 the mass of an
electron. In spite of their light weight, some theorists think axions could be so numerous
in the universe that they might be a prime candidate to constitute "dark matter".
Alternatives to the Higgs mechanism
In light of all that's been said about the importance of the Higgs field and the Higgs boson
to particle physics, would it be a disaster if (as appears possible) no Higgs particle is
actually found?
No. There are alternatives to the Higgs mechanism for explaining electroweak symmetry
breaking and particle mass, even though each has problems of its own. What we do know
is that if no Higgs boson exists, then there must be some other particles or forces -- of an
unknown type -- which play the same role. The symmetry breaking isn't simply an
"accident".
The typical form of such alternatives involves new particles and forces that bind together
in such a way as to produce a composite particle which behaves in essential ways like the
Higgs boson. Thus, although such a particle is not elementary, it still interacts with
known particles to slow them down and give them mass.
In any case, there would be no reason, based on current experimental evidence, to give up
the present standard model. It is not in conflict with experiment. There are certainly many
things which still require explanation. If something like the Higgs mechanism isn't true of
the real world, then there will be other causes. It just may take a little longer to find them.
Technicolor
One of the more noteworthy alternatives developed in the late 1970s was an entirely new
type of force called a "technicolor" force. The basic idea was to construct Higgs bosons
as composite particles -- like mesons and hadrons -- rather than assume they are
elementary particles like leptons and quarks. Essentially, this idea would hypothesize a
new force rather like the color force, but at a scale about a thousand times smaller. The
force was called technicolor because of the analogy with the color force.
In this scheme there would be a new set of spin 1/2 particles called (of course)
technifermions. A bound state of one of these with its antiparticle would be a spin 0
particle (a boson) analogous to a pion (which consists of a quark and an anti-quark,
bound by the color force). Naturally, this would be called a technipion. One such particle
would play the role of the Higgs boson in lending mass to the gauge bosons of the weak
force.
There are a variety of problems with technicolor theory in its various forms. Just to begin
with, while it explains the mass of the weak gauge bosons, it does not explain how
fermions acquire mass. Although the theory predicts a large number of additional
particles should exist, no evidence has been found for any of them, or any other effects of
the hypothetical technicolor force. There are many other problems of a techni-cal nature,
such as problems reproducing known phenomena of weak interactions. Efforts to extend
the theory to deal with such problems have only made it even more baroque and artificial
than it was to begin with.
In short, theories of this kind are still pursued by some who dislike the Higgs mechanism
for one reason or another. But the deficiencies and inelegance of such theories make them
unpopular with most physicists.
Open questions
To sum it all up, physicists have pursued an understanding of the Higgs mechanism for
three related purposes:
To make the Yang-Mills theory of the electroweak force renormalizable and
mathematically consistent
To provide an explanation for the fact most known particles (except for photons
and gluons) have mass
To explain why spontaneous symmetry breaking occurs in the theory of the
electroweak force (and the asymmetry of the electroweak and the strong force in a
grand unified theory)
Theoretically, this effort has been successful on all counts. Experimentally, however,
until Higgs particles are actually observed, there remains substantial room for doubt.
Some of the causes for concern, aside from the lack of direct evidence for Higgs particles,
are as follows:
Introduction of new fields and particles to solve theoretical problems, without
independent evidence, seems a little ad hoc and contrived.
There is little explanation of what causes or generates the Higgs field itself.
(Perhaps another way of saying it is ad hoc.) This can be remedied with the help
of more ambitious theories, such as supersymmetry, but such theories are
themselves unverified.
Computations of the cosmological constant, assuming the existence of Higgs
fields, produce a result that is absurdly large.
Where are the Higgs particles?
This is the biggest concern at the moment. There should exist at least one Higgs boson
with a mass less than about 135 GeV under reasonable assumptions. Actual experiments
have already ruled out any Higgs bosons with masses close to this limit.
What are the theoretical implications if Higgs bosons can't be found?
The standard model would survive. The Higgs mechanism solves various problems for
the standard model, but it is not actually predicted by the model. That is, the mechanism
provides a sufficient, but not necessary, means of resolving the problems. The
nonexistence of Higgs bosons would not lead to any conflict between theory and
experimental results.
The standard model is essentially a theory of massless particles. The Higgs mechanism
provides a means of explaining the masses of particles, through their coupling with the
Higgs field, without sacrificing mathematical consistency of the standard model. If Higgs
particles do not actually exist, it may still be possible that there is a Higgs field which
provides for mass. If there is no Higgs field at all (which would greatly mitigate the
cosmological constant puzzle), then the explanation for particle mass would be a major
mystery, yet the standard model itself wouldn't fall.
What is the origin of mass?
Assuming that the theory of the Higgs mechanism is essentially correct, and that Higgs
particles are eventually observed, then all particles that "couple" with the Higgs field will
acquire a certain amount of mass. Here then is an explanation of where mass comes from.
In fact, none of the particles which occur in the standard model could have any mass that
does not come from coupling with the Higgs field if the theory is to be mathematically
consistent.
But even if all this is correct, there are still puzzles. Where do the masses of the Higgs
particles themselves come from? For any other particle, their observed mass is
proportional to the strength with which they couple to the Higgs particle. But what is it
that determines the strength of this coupling, and hence the specific mass of each
particle?
Most mysteriously of all, since gravity is preeminently the theory of the interaction of
mass with spacetime, how is gravity related to the Higgs mechanism?
What is the origin of the Higgs field itself?
We have noted above various ways in which this question can partially be answered. But
even if these answers are correct as far as they go, they don't seem like a "final" answer.
The situation is somewhat similar to that of questions like "where does space come
from?" or "where does time come from?" Physics may at some point be able to provide
answers to questions like this. Or at least, to questions of where the hypothetical single
unified force and the Higgs fields come from. (Ironically, though, the number of
necessary Higgs fields seems to increase even as the number of independent forces
decreases.)
If there is a Higgs mechanism, what solves the hierarchy problem?
Although the Higgs mechanism handles a number of puzzles fairly well, it creates a rather
nasty problem of its own in grand unified theories, which unify three of the four
fundamental forces (excepting only gravity). This hierarchy problem, though rather
technical, doesn't seem capable of being dismissed as a mere aesthetic blemish. That
would entail a fantastically improbable circumstance. Supersymmetry offers a solution,
but supersymmetry itself currently lacks critical experimental evidence. If supersymmetry
is real, many puzzles are solved. In particular, we have a way to explain the origins of the
Higgs mechanism and to handle the hierarchy problem. But without supersymmetry, we
must find alternative solutions to both problems.
If there is a Higgs mechanism, what keeps the cosmological constant small?
The problem is, in short, that the Higgs mechanism is a bit too efficient. If the vacuum is
actually as full of nonzero Higgs fields as it seemingly must be to account for particle
mass and spontaneous symmetry breaking, then the cosmological constant (i. e., vacuum
energy density) must be enormous -- 120 orders of magnitude larger than what
observation seems to allow. Somehow, the effects of all the Higgs fields need to cancel
each other out almost (but not quite) entirely. It's a "fine tuning" situation that could
hardly happen by chance. Even supersymmetry does not appear to help out.”
“Surveys, overviews, tutorials
Higgs boson
Article from Wikipedia. See also Technicolor (physics).
Physics with ATLAS: The Higgs Particle
Overview of the role of the Higgs field in accounting for the mass of Standard
Model particles.
The Higgs Mechanism
An elementary explanation in cartoon form, based on ideas by David J. Miller.
The original brief article is here.
The Waldegrave Higgs Challenge
The best 5 one-page particle essays on Higgs physics, written in response to a
challenge by UK Science Minister, William Waldegrave.
My Life as a Boson: The Story of 'the Higgs'
A slide presentation by Peter Higgs, given at the 2001: A Spacetime Odyssey
conference.
The search for a standard model Higgs at the LHC
PhD thesis by Ulrik Egede. Detailed technical treatment of theoretical and
experimental Higgs physics. Look in particular at Higgs physics at the LHC.
The Linear Collider Opportunity
An essay by Gordon Kane on the need for construction of a new linear collider.
The essence of the matter is that an understanding of electroweak symmetry
breaking and the Higgs mechanism is a top priority in theoretical particle physics
and that a NLC will provide experimental data not obtainable any other way.
What exactly is the Higgs boson?
Question and answers from Scientific American's Ask the Experts section.
How does the Higgs boson affect string theory?
Question and answer (by Gordon Kane) from Scientific American's Ask the
Experts section.
What is a Goldstone Boson?
Goldstone bosons play a technical role in symmetry breaking via the Higgs
mechanism. The question is answered by Jeffrey Goldstone.
The Higgs Boson
Brief introductory information.
Higgs Boson: One Page Explanation
Five articles that explain the Higgs boson in a page or less.
Recommended references: Magazine/journal articles
Jiggling the Cosmic Ooze
Peter Weiss
Science News, March 10, 2001, pp. 152-154
The Higgs particle is thought to be responsible for the existence of mass in the
standard model. Detection of the Higgs particle is the highest priority objective in
current high-energy physics.
The Higgs Boson
Martinus J. G. Veltman
Scientific American, November 1986, pp. 76-84
Historically, physicists have developed the theory of Higgs fields for two different
reasons: to account for masses of elementary particles, and to give consistency to
the mathematics of elementary particle theory. Actual existence of Higgs fields
and bosons would solve some problems, but pose others.
Recommended references: Books
Abdus Salam -- Unification of Fundamental Forces
Cambridge University Press, 1990
An introductory lecture by one of the co-recipients of a Nobel prize for work on
the unification of the weak and electromagnetic forces. “
“How Particles Acquire Mass
By Mary and Ian Butterworth, Imperial College London, and Doris and Vigdor Teplitz,
Southern Methodist University, Dallas, Texas, USA.
The Higgs boson is a hypothesised particle which, if it exists, would give the mechanism
by which particles acquire mass.
Matter is made of molecules; molecules of atoms; atoms of a cloud of electrons about
one-hundred-millionth of a centimetre and a nucleus about one-hundred-thousandth the
size of the electron cloud. The nucleus is made of protons and neutrons. Each proton (or
neutron) has about two thousand times the mass of an electron. We know a good deal
about why the nucleus is so small. We do not know, however, how the particles get their
masses. Why are the masses what they are? Why are the ratios of masses what they are?
We can't be said to understand the constituents of matter if we don't have a satisfactory
answer to this question.
Peter Higgs has a model in which particle masses arise in a beautiful, but complex,
progression. He starts with a particle that has only mass, and no other characteristics,
such as charge, that distinguish particles from empty space. We can call his particle H. H
interacts with other particles; for example if H is near an electron, there is a force
between the two. H is of a class of particles called "bosons". We first attempt a more
precise, but non-mathematical statement of the point of the model; then we give
explanatory pictures.
In the mathematics of quantum mechanics describing creation and annihilation of
elementary particles, as observed at accelerators, particles at particular points arise from
"fields" spread over space and time. Higgs found that parameters in the equations for the
field associated with the particle H can be chosen in such a way that the lowest energy
state of that field (empty space) is one with the field not zero. It is surprising that the field
is not zero in empty space, but the result, not an obvious one, is: all particles that can
interact with H gain mass from the interaction.
Thus mathematics links the existence of H to a contribution to the mass of all particles
with which H interacts. A picture that corresponds to the mathematics is of the lowest
energy state, "empty" space, having a crown of H particles with no energy of their own.
Other particles get their masses by interacting with this collection of zero-energy H
particles. The mass (or inertia or resistance to change in motion) of a particle comes from
its being "grabbed at" by Higgs particles when we try and move it.
If particles do get their masses from interacting with the empty space Higgs field, then
the Higgs particle must exist; but we can't be certain without finding the Higgs. We have
other hints about the Higgs; for example, if it exists, it plays a role in "unifying" different
forces. However, we believe that nature could contrive to get the results that would flow
from the Higgs in other ways. In fact, proving the Higgs particle does not exist would be
scientifically every bit as valuable as proving it does.
These questions, the mechanisms by which particles get their masses, and the relationship
among different forces of nature, are major ones and so basic to having an understanding
of the constituents of matter and the forces among them, that it is hard to see how we can
make significant progress in our understanding of the stuff of which the earth is made
without answering them.
Last updated on 21st September 1998, by Dr S.L.Lloyd “
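A note to make the mathematics alluded to above concrete (an added illustration, not part of the quoted article): the standard textbook way to arrange for the lowest-energy state of the H field to be non-zero is to give it a "Mexican hat" potential,

V(\phi) = \mu^2 |\phi|^2 + \lambda |\phi|^4, \qquad \lambda > 0 .

If the parameter \mu^2 is chosen negative, the minimum of V lies not at \phi = 0 but at |\phi| = v/\sqrt{2} with v = \sqrt{-\mu^2/\lambda}, so "empty" space carries a non-zero field value. A particle that couples to \phi with strength g then acquires a mass of order m \sim g v from that interaction, which is the precise sense in which everything that interacts with H gains mass; in the Standard Model v is about 246 GeV.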
“Politics, Solid State and the Higgs
By David Miller Department of Physics and Astronomy, University College, London, UK.
1. The Higgs Mechanism
Imagine a cocktail party of political party workers who are uniformly distributed across
the floor, all talking to their nearest neighbours. The ex-Prime Minister enters and crosses
the room. All of the workers in her neighbourhood are strongly attracted to her and
cluster round her. As she moves she attracts the people she comes close to, while the ones
she has left return to their even spacing. Because of the knot of people always clustered
around her she acquires a greater mass than normal, that is she has more momentum for
the same speed of movement across the room. Once moving she is hard to stop, and once
stopped she is harder to get moving again because the clustering process has to be
restarted.
In three dimensions, and with the complications of relativity, this is the Higgs
mechanism. In order to give particles mass, a background field is invented which
becomes locally distorted whenever a particle moves through it. The distortion - the
clustering of the field around the particle - generates the particle's mass. The idea comes
directly from the physics of solids. Instead of a field spread throughout all space, a solid
contains a lattice of positively charged crystal atoms. When an electron moves through
the lattice the atoms are attracted to it, causing the electron's effective mass to be as much
as 40 times bigger than the mass of a free electron.
The postulated Higgs field in the vacuum is a sort of hypothetical lattice which fills our
Universe. We need it because otherwise we cannot explain why the Z and W particles
which carry the weak interactions are so heavy while the photon which carries
electromagnetic forces is massless.
2. The Higgs Boson
Now consider a rumour passing through our room full of uniformly spread political
workers. Those near the door hear of it first and cluster together to get the details, then
they turn and move closer to their next neighbours who want to know about it too. A
wave of clustering passes through the room. It may spread to all the corners or it may
form a compact bunch which carries the news along a line of workers from the door to
some dignitary at the other side of the room. Since the information is carried by clusters
of people, and since it was clustering that gave extra mass to the ex-Prime Minister, then
the rumour-carrying clusters also have mass.
The Higgs boson is predicted to be just such a clustering in the Higgs field. We will find
it much easier to believe that the field exists, and that the mechanism for giving other
particles mass is true, if we actually see the Higgs particle itself. Again, there are analogies in
the physics of solids. A crystal lattice can carry waves of clustering without needing an
electron to move and attract the atoms. These waves can behave as if they are particles.
They are called phonons and they too are bosons.
There could be a Higgs mechanism, and a Higgs field throughout our Universe, without
there being a Higgs boson. The next generation of colliders will sort this out.
Last updated on 30th August 1995, by Dr S.L.Lloyd “
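The solid-state analogy above rests on the standard notion of effective mass (an added aside, not part of the quoted piece): an electron moving through a crystal responds to forces as if its mass were set by the curvature of its energy band E(k) rather than by the free-electron value,

m^{*} = \hbar^{2} \left( \frac{d^{2}E}{dk^{2}} \right)^{-1},

so a flat band (small curvature) means a large m^{*}. Enhancements of the size Miller quotes, and far larger ones, are observed in real materials.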
“Of Particles, Pencils and Unification
By Tom Kibble Department of Physics, Imperial College, London, UK.
Theoretical physicists always aim for unification. Newton recognised the fall of an
apple, the tides and the orbits of the planets as aspects of a single phenomenon, gravity.
Maxwell unified electricity, magnetism and light. Each synthesis extends our
understanding and leads eventually to new applications.
In the 1960s the time was ripe for a further step. We had a marvellously accurate theory
of electromagnetic forces, quantum electrodynamics, or QED, a quantum version of
Maxwell's theory. In it, electromagnetic forces are seen as due to the exchange between
electrically charged particles of photons, packets (or quanta) of electromagnetic waves.
(The distinction between particle and wave has disappeared in quantum theory.) The
"weak" forces, involved in radioactivity and in the Sun's power generation, are in many
ways very similar, save for being much weaker and restricted in range. A beautiful
unified theory of weak and electromagnetic forces was proposed in 1967 by Steven
Weinberg and Abdus Salam (independently). The weak forces are due to the exchange of
W and Z particles. Their short range, and apparent weakness at ordinary ranges, is
because, unlike the photon, the W and Z are, by our standards, very massive particles,
100 times heavier than a hydrogen atom.
The "electro-weak" theory has been convincingly verified, in particular by the discovery
of the W and Z at CERN in 1983, and by many tests of their properties. However, the
origin of their masses remains mysterious. Our best guess is the "Higgs mechanism" - but
that aspect of the theory remains untested.
The fundamental theory exhibits a beautiful symmetry between W, Z and photon. But this
is a spontaneously broken symmetry. Spontaneous symmetry breaking is a ubiquitous
phenomenon. For example, a pencil balanced on its tip shows complete rotational
symmetry - it looks the same from every side - but when it falls it must do so in some
particular direction, breaking the symmetry. We think the masses of the W and Z (and of
the electron) arise through a similar mechanism. It is thought there are "pencils"
throughout space, even in vacuum. (Of course, these are not real physical pencils - they
represent the "Higgs field" - nor is their direction a direction in real physical space, but
the analogy is fairly close.) The pencils are all coupled together, so that they all tend to
fall in the same direction. Their presence in the vacuum influences waves travelling
through it. The waves have of course a direction in space, but they also have a "direction"
in this conceptual space. In some "directions", waves have to move the pencils too, so
they are more sluggish; those waves are the W and Z quanta.
The theory can be tested, because it suggests that there should be another kind of wave, a
wave in the pencils alone, where they are bouncing up and down. That wave is the Higgs
particle. Finding it would confirm that we really do understand the origin of mass, and
allow us to put the capstone on the electro-weak theory, filling in the few remaining gaps.
Once the theory is complete, we can hope to build further on it: a longer-term goal is a
unified theory involving also the "strong" interactions that bind protons and neutrons
together in atomic nuclei - and if we are really optimistic, even gravity, seemingly the
hardest force to bring into the unified scheme.
There are strong hints that a "grand unified" synthesis is possible, but the details are still
very vague. Finding the Higgs would give us very significant clues to the nature of that
greater synthesis.
Last updated on 30th August 1995, by Dr S.L.Lloyd “
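In the electro-weak theory itself, the "sluggish" waves Kibble describes are the W and Z acquiring masses proportional to the vacuum value v of the Higgs field (an added note, not part of the quoted article). With g and g' the two electro-weak couplings and \theta_W the weak mixing angle,

m_W = \tfrac{1}{2} g v, \qquad m_Z = \tfrac{1}{2} \sqrt{g^2 + g'^2}\, v = \frac{m_W}{\cos\theta_W}, \qquad m_\gamma = 0,

and with v \approx 246 GeV these reproduce the observed m_W \approx 80 GeV and m_Z \approx 91 GeV, while the photon, the wave that leaves the "pencils" undisturbed, stays massless.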
“Ripples at the Heart of Physics
By Simon Hands Theory Division, CERN, Geneva, Switzerland.
The Higgs boson is an undiscovered elementary particle, thought to be a vital piece of the
closely fitting jigsaw of particle physics. Like all particles, it has wave properties akin to
those ripples on the surface of a pond which has been disturbed; indeed, only when the
ripples travel as a well defined group is it sensible to speak of a particle at all. In quantum
language the analogue of the water surface which carries the waves is called a field. Each
type of particle has its own corresponding field.
The Higgs field is a particularly simple one - it has the same properties viewed from
every direction, and in important respects is indistinguishable from empty space. Thus
physicists conceive of the Higgs field being "switched on", pervading all of space and
endowing it with "grain" like that of a plank of wood. The direction of the grain is
undetectable, and only becomes important once the Higgs' interactions with other
particles are taken into account. For instance, particles called vector bosons can travel
with the grain, in which case they move easily for large distances and may be observed as
photons - that is, particles of light that we can see or record using a camera; or against, in
which case their effective range is much shorter, and we call them W or Z particles.
These play a central role in the physics of nuclear reactions, such as those occurring in
the core of the sun.
The Higgs field enables us to view these apparently unrelated phenomena as two sides
of the same coin; both may be described in terms of the properties of the same vector
bosons. When particles of matter such as electrons or quarks (elementary constituents of
protons and neutrons, which in turn constitute the atomic nucleus) travel through the
grain, they are constantly flipped "head-over-heels". This forces them to move more
slowly than their natural speed, that of light, by making them heavy. We believe the
Higgs field is responsible for endowing virtually all the matter we know about with mass.
Like most analogies, the wood-grain one is persuasive but flawed: we should think of the
grain as not defining a direction in everyday three-dimensional space, but rather in some
abstract internal space populated by various kinds of vector boson, electron and quark.
The Higgs' ability to fill space with its mysterious presence makes it a vital component in
more ambitious theories of how the Universe burst into existence out of some initial
quantum fluctuation, and why the Universe prefers to be filled with matter rather than
anti-matter; that is, why there is something rather than nothing. To constrain these ideas
more rigorously, and indeed flesh out the whole picture, it is important to find evidence
for the Higgs field at first hand - in other words, find the boson. There are unanswered
questions: the Higgs' very simplicity and versatility, beloved of theorists, makes it hard to
pin down. How many Higgs particles are there? Might it/they be made from still more
elementary components? Most crucially, how heavy is it? Our current knowledge can only
put its mass roughly between that of an iron atom and three times that of a uranium atom.
This is a completely new form of matter about whose nature we still have only vague
hints and speculations and its discovery is the most exciting prospect in contemporary
particle physics.
Last updated on 21st September 1998, by Dr S.L.Lloyd “
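The mass window quoted above can be translated into the units particle physicists use (a back-of-envelope conversion added here, not part of the quoted article). Taking one atomic mass unit as roughly 0.93 GeV,

m_{\mathrm{Fe}} \approx 56 \times 0.93\ \mathrm{GeV} \approx 52\ \mathrm{GeV}, \qquad 3\, m_{\mathrm{U}} \approx 3 \times 238 \times 0.93\ \mathrm{GeV} \approx 660\ \mathrm{GeV},

so the range being described runs from roughly 50 GeV to roughly 700 GeV.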
APPENDIX 3
Some research and notes on convergence preliminaries by Abdul-Basit Khan:
1.
“Nowhere to Hide”
Question: How would you define “convergence” as it relates to
information technology?
Give an example.
Telecommunications and information technologies are converging in more than one way.
The very definition of information is changing. Under the older definitions, telecommunications
networks carried data in bits per second (the bit being our quantum of data) while computers
processed data as bytes. The new perspective is that both computational and
telecommunications systems process information, a fundamental entity of this universe with
both a quantity and a quality parameter. Information, however it may be quantified (and
qualified), is being processed and transferred between systems around the world.
Cellular networks were once separate from the World Wide Web; now they support Internet-enabled
devices as well. With the introduction of the General Packet Radio Service (GPRS) standards, the
overlay of GPRS on GSM networks by cellular providers, and the interconnection of fixed data
networks to mobile networks through gateway nodes, the very definition of Customer Premises
Equipment is changing. A handheld or a cell phone is not only a communication device but also a
small computer, an information-processing and information-transferring system. With new fixed
wireless applications in the local loop and a convergent IP phone/computer (Internet device),
consumers will find no difference between telephony and computation. An example of this is the
ever-increasing enhancement of the browsing/surfing capabilities of cell phones.
With standards such as ENUM evolving, a unique phone number for every subscriber in the
world would identify him or her on any communication device or medium, whichever one
the preference parameters are set to (a concrete sketch of the ENUM mapping is given at the
end of this answer).
Whether the subscriber is logged into MSN Messenger on the desktop, on the cell phone,
on the land-line, or has the preferences set to any other personal communication device, such
as a BlackBerry, his or her unique telephone number will identify the subscriber on this grand
unified voice/data network of tomorrow. All networks will converge, and what is transmitted
will be neither data alone nor voice alone, but “INFORMATION”.
These are the convergence trends in information technology and telecommunications: there
is no longer a data subscriber or a voice subscriber, but a unified network with a unique
identifier, more than an IP address and more than a telephone number, used to locate the
end-user on any communications medium of choice.
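The ENUM mapping referred to above is concrete enough to sketch. An E.164 telephone number is rewritten as a DNS domain name (its digits reversed, dot-separated, and placed under e164.arpa), and the DNS records published at that name list the subscriber's preferred contact services. The minimal Python sketch below illustrates only the number-to-domain step; the phone number is fictitious, the function name is ours, and a real deployment would follow this with a NAPTR lookup using a DNS library.

# Sketch of the ENUM (E.164 number to DNS name) mapping described above.
# The number used is fictitious and serves only as an illustration.

def e164_to_enum_domain(phone_number: str) -> str:
    """Convert an E.164 number such as '+1 416 555 0123' to its ENUM domain."""
    digits = [c for c in phone_number if c.isdigit()]  # drop '+', spaces, dashes
    return ".".join(reversed(digits)) + ".e164.arpa"

if __name__ == "__main__":
    number = "+1 416 555 0123"  # fictitious example number
    print(e164_to_enum_domain(number))
    # -> 3.2.1.0.5.5.5.6.1.4.1.e164.arpa
    # A NAPTR query on this domain would return records pointing at the
    # subscriber's preferred services, for example sip: or mailto: URIs.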
2.
Question: Do you agree or disagree that the desktop is dead? Why?
The desktop is not completely dead, but it is mutating, changing and evolving. With Voice
over IP as the new mode of unified communication, and ENUM standards evolving, the
nature of Customer Premises Equipment is changing as well.
The desktop, with a hard drive and large permanent memory, has traditionally been used as
the repository of personal information for individual consumers and users of information.
As illustrated in “Mirror Worlds”, the Internet, as the world’s largest distributed information
system, is taking over many of the functions of the desktop. Distributed databases,
information storage and retrieval systems, and transaction-based systems no longer require
large storage on the desktop. The constraint is now the speed of the communications channel
and the efficiency of the queries.
Many of us use contact management software such as Plaxo to store contact information
on a central server, to be retrieved in an instant on the desktop. Often, we use Hotmail to
store important e-mails and to refer to them at a later date. For this course, all the
information exchange, submissions and grades lie on a server at Humber. The desktop’s
functionality has totally changed. We use the desktop to view information saved on remote
servers accessible over the Internet. In the current telecommunications world, an example of
this kind of phone-desktop hybrid is Bell’s Vista 350 telephone: stock quotes and weather
reports are all accessible at the touch of a finger, on buttons configured according to preferences.
With Voice over IP and convergent technologies evolving, the speed and bandwidth of
communications channels will be much enhanced. Voice over IP will lead to new web
devices that do not require a large hard drive or memory. The functionality of these
devices will be only to retrieve and display information. There will be enhanced bookmarks
and new desktop software (e.g. Scopeware) to track trends and mimic the usage patterns and
behaviour of individual consumers. The emphasis will be on faster and more organized
information retrieval and display. The management of distributed information retrieved on
the desktop, and the pointers to this information, will be dictated by usage patterns. An
interesting device currently available is the web-racer mouse, which, at the click of a button,
surfs the preferred Internet sites.
With Voice over IP standards evolving and convergence of the computer and telephony worlds,
the desktop is not going to die. Rather it is going to mutate into a specialized and customized
user interface for globally distributed storage media.