The document discusses constraints on Higgs-portal models of weakly interacting massive particles (WIMPs) from Large Hadron Collider (LHC) data. It analyzes limits from various LHC search channels, including vector boson fusion (VBF), mono-jet, and mono-Z, on heavy Higgs-portal WIMPs with masses between 100 GeV and 350 GeV. The VBF channel provides the strongest constraints, excluding coupling strengths greater than approximately 0.5 for vector WIMPs in this mass range. LHC searches can probe WIMPs whose predicted thermal freeze-out relic abundances are very small.
This document discusses limits on Higgs-portal weakly interacting massive particles (WIMPs) from LHC data. It begins by introducing Higgs portal models that allow interactions between the Standard Model and hidden-sector particles through the Higgs field. It then reviews current constraints on Higgs portal WIMPs from relic abundance, direct detection, and collider searches. The document focuses on further exploring Higgs portal models using LHC data, particularly invisible Higgs decays, mono-jet/mono-Z searches, and vector boson fusion processes. Formulas for calculating production cross sections of WIMP pairs are provided. The analysis details how various LHC searches can set limits on the coupling parameters in different Higgs portal models.
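To make the scalar-portal case concrete, here is a minimal sketch (not the document's own code) of the tree-level invisible width Γ(h→SS) for a singlet scalar S coupled via (λ/2)S²|H|²; the SM total width of roughly 4.07 MeV is a standard reference value, and the coupling/mass inputs below are illustrative.

```python
import math

def higgs_invisible_width(lam, m_s, m_h=125.0, v=246.0):
    """Tree-level partial width Gamma(h -> SS) for a scalar singlet S
    coupled via (lam/2) S^2 |H|^2:
        Gamma = lam^2 v^2 / (32 pi m_h) * sqrt(1 - 4 m_s^2 / m_h^2).
    All masses and the returned width are in GeV; zero below threshold."""
    if 2.0 * m_s >= m_h:
        return 0.0
    beta = math.sqrt(1.0 - 4.0 * m_s**2 / m_h**2)
    return lam**2 * v**2 / (32.0 * math.pi * m_h) * beta

def br_invisible(lam, m_s, gamma_sm=4.07e-3):
    """Invisible branching ratio, adding the new width on top of the
    SM total width (~4.07 MeV)."""
    g = higgs_invisible_width(lam, m_s)
    return g / (g + gamma_sm)
```

For WIMPs heavier than m_h/2 the on-shell decay closes, which is why the heavier mass range in the document must be probed through off-shell Higgs exchange instead.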
- The document discusses a 750 GeV diphoton excess observed by the ATLAS and CMS experiments at the LHC and constraints from dark matter experiments.
- It analyzes the possibility that the excess could be explained by a new scalar particle that decays into photons and also invisibly into dark matter.
- A simple effective model is presented where the scalar interacts with the Standard Model through couplings to electroweak gauge bosons and couples to dark matter through a renormalizable interaction.
- Parameter scans of the model are shown and constraints from LHC searches, gamma ray lines, cosmic rays, and direct detection experiments are considered to identify viable regions of parameter space.
Quantum networks with superconducting circuits and optomechanical transducers (Ondrej Cernotik)
Connecting distant chips in a quantum network is one of the biggest challenges for superconducting quantum computers. Superconducting systems operate at microwave frequencies; transmission of microwave signals through room-temperature quantum channels is impossible due to the omnipresent thermal noise. I will show how two well-known experimental techniques—parity measurements on superconducting systems and optomechanical force sensing—can be combined to generate entanglement between two superconducting qubits through a room-temperature environment. An optomechanical transducer acting as a force sensor can be used to determine the state of a superconducting qubit. A joint readout of two qubits and postselection can lead to entanglement between the qubits. From a conceptual perspective, the transducer senses force exerted by a quantum object, entering a new paradigm in force sensing. In a typical scenario, the force sensed by an optomechanical system is classical. I will argue that the coherence between different states of the qubit (which give rise to different values of the force) can be preserved during the measurement, making it an important resource for quantum communication.
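The joint-readout-plus-postselection idea can be illustrated in the simplest possible setting: two idealized qubits and a projective joint-parity measurement. This is a toy numpy sketch of the measurement principle, not a model of the optomechanical transducer itself.

```python
import numpy as np

# Each qubit starts in |+> = (|0> + |1>)/sqrt(2); neither is entangled yet.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
psi = np.kron(plus, plus)

# An ideal joint-parity (ZZ) readout projects onto a parity subspace.
# Postselecting the even outcome keeps only the |00> and |11> components.
P_even = np.diag([1.0, 0.0, 0.0, 1.0])
proj = P_even @ psi
p_even = float(proj @ proj)          # probability the postselection succeeds
bell = proj / np.linalg.norm(proj)   # normalized postselected state

# The surviving state is the maximally entangled (|00> + |11>)/sqrt(2).
target = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
fidelity = float(abs(bell @ target) ** 2)
```

The postselection succeeds half the time and leaves a Bell state; the talk's point is that a realistic transducer-mediated readout can approximate this projection without destroying the qubits' coherence.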
Optimization of parameter settings for GAMG solver in simple solver, OpenFOAM... (Masashi Imano)
The document summarizes presentations given by Masashi Imano of OCAEL Co. Ltd. at OpenFOAM study meetings for beginners in Kansai and Kanto, Japan. It discusses optimizing parameters for the GAMG solver in OpenFOAM, including the number of cells in the coarsest grid level. Testing on a 16-node SGI cluster showed the optimal range was 32-1024 cells. It also discusses parameters like merge levels, number of smoothing sweeps, and their effect on solver speed for different node counts. The document provides guidance on selecting parameters for the GAMG solver in OpenFOAM simulations.
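The GAMG parameters discussed above live in a case's system/fvSolution dictionary. A hedged sketch of such an entry follows; the numeric values are illustrative defaults, not the presentation's benchmarked configuration.

```
solvers
{
    p
    {
        solver                  GAMG;
        smoother                GaussSeidel;
        nCellsInCoarsestLevel   512;    // the talk's tests favored 32-1024
        mergeLevels             1;
        nPreSweeps              0;
        nPostSweeps             2;
        tolerance               1e-6;
        relTol                  0.01;
    }
}
```

nCellsInCoarsestLevel, mergeLevels, and the sweep counts are exactly the knobs the presentation tunes; their optimum depends on mesh size and node count, which is why the talk reports ranges rather than single values.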
Trigger Workshop material CERN (Anton Osika)
The document discusses different Level 1 trigger selections for fat jets that could be alternatives to the baseline single jet trigger. It analyzes selections based on the summed energy of nearby jets, HT constructed from low-energy jets, and multijet triggers. HT(C)200, which requires the scalar sum of transverse energies of jets with ET > 20 GeV and |η| < 2.5 (3.2) to be above 200 GeV, shows the best overall performance among the selections for the models considered. Combining selections could help recover some inefficiencies at lower event filter thresholds.
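The HT(C) quantity defined above is just a filtered scalar sum; a minimal sketch (jet values below are invented for illustration):

```python
def ht(jets, et_min=20.0, eta_max=2.5):
    """Scalar sum of jet transverse energies over jets passing
    ET > et_min (GeV) and |eta| < eta_max; jets are (ET, eta) pairs."""
    return sum(et for et, eta in jets if et > et_min and abs(eta) < eta_max)

def passes_ht200(jets):
    """The HT(C)200-style requirement: HT above 200 GeV."""
    return ht(jets) > 200.0
```

With the |η| < 3.2 variant one only changes eta_max; the trade-off studied in the talk is between acceptance gained in the forward region and the extra rate it costs.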
The document describes the Fluorescence detector Array of Single-pixel Telescopes (FAST) project. Some key points:
- FAST will consist of an array of single-pixel telescopes that detect ultra-high-energy cosmic rays via the fluorescence technique.
- A prototype was constructed and tested in 2015-2017. Data was collected at the Telescope Array site over 21 km and compared to simulations.
- The project aims to build a larger array with more telescopes that could achieve 4 times the exposure of the Telescope Array or 10 times that of the Pierre Auger Observatory. This would allow studies of cosmic rays above 10^19.5 eV.
- An update on the project's design progress is also given.
This document summarizes the EXO (Enriched Xenon Observatory) experiment, which aims to search for neutrinoless double beta decay in 136Xe. It describes the EXO-200 detector, which contains 200 kg of xenon enriched to 80% in 136Xe. The detector measures both ionization and scintillation signals to achieve high energy resolution. The document discusses the goals of EXO-200: to search for 0νββ decay, measure the 2νββ half-life, and gain experience operating a large liquid xenon detector. It also describes plans to identify barium daughters from double beta decays using laser spectroscopy to achieve a background-free experiment.
Optimization of relaxation factor for simple solver, OpenFOAM Study Meeting f... (Masashi Imano)
The document summarizes Masashi Imano's presentation on optimizing relaxation factors for simple solvers in OpenFOAM. Imano discusses benchmark test cases for pedestrian wind environment simulations and presents the calculation conditions for Case E which models building complexes in Niigata with a Cartesian mesh, interpolated inflow boundary conditions, and standard k-epsilon turbulence model. Validation results for Case E show good agreement with experimental data. Imano also discusses test cluster resources for running simulations and focuses on adjusting relaxation factors for pressure and velocity in the SIMPLE and SIMPLEC solvers to accelerate convergence and reduce execution time per iteration while maintaining solution accuracy.
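Relaxation factors of the kind tuned in the talk are also set in system/fvSolution. A sketch with commonly used SIMPLE starting values follows; these are illustrative, not the presentation's optimized numbers.

```
relaxationFactors
{
    fields
    {
        p               0.3;    // SIMPLEC typically tolerates values near 1.0
    }
    equations
    {
        U               0.7;
        "(k|epsilon)"   0.7;
    }
}
```

Larger factors converge in fewer iterations but risk divergence; the talk's point is that the best trade-off differs between SIMPLE and SIMPLEC and should be re-tuned per case.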
This document discusses the sensitivity of a next-generation reactor neutrino experiment to the neutrino mass hierarchy. It analyzes factors that affect the sensitivity, such as baseline length, neutrino flux, detector size, energy resolution, and uncertainties in the neutrino oscillation parameters. With a 16.5 GW thermal-power reactor source, an 18 kiloton detector, and 5 years of data taking, such an experiment could determine the mass hierarchy at over 3-sigma significance, provided the energy resolution and the energy-scale uncertainty are kept below roughly 3% and 0.5%, respectively. Other systematics, from multiple reactor sites and from the oscillation parameters, must also be carefully controlled to reach this sensitivity.
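The hierarchy signature such experiments exploit sits in the standard three-flavour vacuum survival probability for reactor antineutrinos, where the sign of Δm²₃₁ shifts the fast oscillation pattern. A sketch follows; the oscillation parameters are representative global-fit values, not the document's inputs.

```python
import math

def pee(E_MeV, L_km, hierarchy="NH",
        s12sq=0.307, s13sq=0.0218, dm21=7.5e-5, dm31=2.5e-3):
    """Vacuum anti-nu_e survival probability (standard three-flavour
    formula). The hierarchy enters through the sign of dm31 (eV^2).
    Phases use 1.267 * dm^2[eV^2] * L[km] / E[GeV]."""
    d31 = dm31 if hierarchy == "NH" else -dm31
    d32 = d31 - dm21
    c12sq = 1.0 - s12sq
    c13sq = 1.0 - s13sq

    def ssq(dm):
        return math.sin(1.267 * dm * L_km / (E_MeV * 1e-3)) ** 2

    return (1.0
            - 4.0 * s13sq * c13sq * (c12sq * ssq(d31) + s12sq * ssq(d32))
            - c13sq**2 * 4.0 * s12sq * c12sq * ssq(dm21))
```

Since sin² is even, the hierarchy difference survives only through |Δm²₃₂| differing from |Δm²₃₁|, a sub-percent effect in the spectrum; this is why the document's percent-level resolution and systematics requirements are so demanding.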
ACADGILD:: FRONTEND LESSON - Ruby on Rails vs Groovy on Rails (Padma shree. T)
The document compares the Ruby on Rails and Groovy on Rails frameworks. Ruby on Rails is built on the Ruby programming language and uses an active record approach, while Groovy on Rails is built on Groovy and Java and uses a domain-oriented approach. The document discusses factors to consider when choosing between the frameworks like skills, community support, and deployment options. It notes that Ruby on Rails is more established while Groovy on Rails may be easier for Java developers.
The document is a curriculum vitae for an individual seeking employment. It includes sections on objective, academic qualifications, computer skills, work experience, duties and responsibilities from past roles, personal information, languages known, and a declaration. The candidate has over 15 years of work experience in administrative, procurement, and secretarial roles in Saudi Arabia and seeks a position where they can further develop their skills and contribute to an organization.
This document lists various gift items including Chinese fans, coasters, mugs, wet napkins, caps, water bottles, stress balls, key chains, flash memory, CD cases, mouse sets, mouse pads, computer bags, card holders, pens, pen holders, and sun shades. It provides contact information to get more details on these gift item samples.
Professional Procurement Training Helps To Increase Your Skill (Peter Desilva)
Blue Ocean Academy in Dubai offers procurement training courses that help professionals and students improve their purchasing and procurement-management skills and gain a broad grounding in these principles. More details can be found at the URL provided.
This document discusses limits on Higgs portal dark matter models from LHC data. It analyzes three Higgs portal models - scalar, vector, and antisymmetric-tensor portals - in which a dark matter particle interacts with the Standard Model Higgs boson. It summarizes current constraints from relic abundance, direct detection, and collider searches. It then focuses on recasting LHC searches for invisible Higgs decays, mono-jet, and mono-Z signatures to place limits on the production of dark matter particle pairs through the Higgs portal in these models.
This document discusses constraints on Higgs portal dark matter models from LHC invisible searches. It examines limits from vector boson fusion (VBF) searches for invisible Higgs decays, as well as mono-jet and mono-Z searches. The author calculates cross sections for vector, scalar and tensor dark matter particles coupling to the Higgs to compare with experimental limits from these searches. The goal is to investigate the constraints on heavier Higgs portal dark matter models from current LHC data.
Talk @ Beyond the Standard Model in Okinawa 2016, 2016.03.02 (Yoshitaro Takaesu)
The document discusses a 750 GeV resonance observed by ATLAS and CMS, and considers its properties and decay modes. It analyzes the ability of the HL-LHC and a potential 1 TeV photon collider to detect the different decay modes, depending on the model parameters. A 1 TeV photon collider could help explore regions beyond the LHC's reach, such as decay modes to electroweak boson pairs or gluons if their branching ratios are small. The collider could detect various decay modes with 1 ab⁻¹ of data or less, helping to further constrain the properties of the unknown 750 GeV particle.
Theoretic and experimental investigation of gyro-BWO (Pei-Che Chang)
This document outlines a student's theoretical and experimental investigation into improving the efficiency and bandwidth of gyrotrons. It discusses using a tapered waveguide to enhance efficiency through deeper electron bunching. Simulation results show higher efficiency and frequency tunability over a range of currents and magnetic fields. The document also describes an experiment using a 95 kV, 5 A gyrotron with a tapered magnetic field that demonstrated oscillation on the lowest-order axial mode with a 3 dB tuning bandwidth of over 30%.
The document discusses top quark physics that can be studied at the Large Hadron Collider (LHC). It outlines several measurements that could be made with early LHC data, including the observation of top quark production which would indicate the detectors are functioning properly. With 10 inverse picobarns of data, the top quark production cross section could be measured to around 10% precision using dilepton and semileptonic decay channels. The document also discusses issues that may affect early measurements and techniques for improving the purity of the top quark signal in kinematic selections.
IC Design of Power Management Circuits (IV) (Claudia Sin)
by Wing-Hung Ki
Integrated Power Electronics Laboratory
ECE Dept., HKUST
Clear Water Bay, Hong Kong
www.ee.ust.hk/~eeki
International Symposium on Integrated Circuits
Singapore, Dec. 14, 2009
This document describes a study measuring the fraction of Υ(1S), Υ(2S), and Υ(3S) mesons originating from χb(1P), χb(2P), and χb(3P) decays as a function of pT(Υ), using data collected by the LHCb experiment at center-of-mass energies of 7 and 8 TeV. The analysis determines yields of Υ mesons and of Υ candidates from χb → Υγ decays in different pT bins. Monte Carlo simulations are used to calculate efficiencies and to compare with data distributions. The results improve the precision of previous LHCb measurements of these fractions and include a measurement of the χb1(3P) mass.
This document discusses MOSFET device physics and modeling. It begins with an overview of MOSFET operation and important equations. It then discusses modeling the current-voltage characteristics using gradual channel approximation. The document also covers subthreshold behavior, mobility effects, threshold voltage control, and more complete models that include both drift and diffusion currents.
This document discusses MOSFET device physics and modeling. It begins with an overview of MOSFET operation and important equations. It then discusses modeling the current-voltage characteristics using gradual channel approximation. The document also covers threshold voltage control, mobility effects, sub-threshold behavior, and more complete models that include both drift and diffusion currents.
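The gradual-channel (square-law) current-voltage model such chapters derive can be written down directly. A sketch with illustrative device parameters (not values from the slides):

```python
def id_square_law(vgs, vds, vt=0.7, k=2e-4, w_over_l=10.0, lam=0.0):
    """Long-channel drain current from the gradual-channel approximation.
    k = mu_n * Cox (A/V^2), lam = channel-length-modulation parameter.
    Returns 0 below threshold (ignoring the subthreshold current that a
    complete drift-diffusion model would add)."""
    beta = k * w_over_l
    vov = vgs - vt                      # overdrive voltage
    if vov <= 0:
        return 0.0
    if vds < vov:                       # triode / linear region
        return beta * (vov - vds / 2.0) * vds
    return 0.5 * beta * vov**2 * (1.0 + lam * vds)   # saturation
```

The "more complete models" the summary mentions replace the hard cutoff at vov <= 0 with an exponential subthreshold term, since the drain current does not actually vanish at threshold.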
Ch6 lecture slides, Chenming Hu, Device for IC (Chenming Hu)
The MOSFET is the building block of modern integrated circuits like memory chips and microprocessors. It has a small size, high speed, and low power consumption, making it suitable for these applications. The MOSFET structure consists of a gate, source, and drain above a channel. When voltage is applied to the gate, an electric field forms a channel between the source and drain through which current can flow. MOSFETs come in N-type and P-type varieties and are combined in complementary pairs as CMOS devices for digital circuits. The speed and power consumption of MOSFET-based circuits can be improved by increasing the drive current and reducing the threshold voltage and parasitic capacitances.
Ch5 lecture slides, Chenming Hu, Device for IC (Chenming Hu)
This document summarizes key concepts about MOS capacitors including:
1) The structure and operation of an MOS capacitor including accumulation, depletion, and inversion regions depending on the gate voltage Vg relative to the flat-band voltage Vfb and threshold voltage Vt.
2) Equations relating surface potential φs, depletion width Wdep, oxide capacitance Cox, and inversion charge Qinv to the applied gate voltage Vg.
3) Sources of threshold voltage Vt variation including body doping, oxide thickness Tox, and fixed oxide charge Qox.
4) Effects of poly-silicon gate depletion on the effective oxide thickness and inversion charge Qinv.
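Points 2) and 3) can be made concrete with the textbook long-channel threshold-voltage expression. A sketch with illustrative silicon numbers (cm-based units; not values from the slides):

```python
import math

q = 1.602e-19          # elementary charge, C
eps0 = 8.854e-14       # vacuum permittivity, F/cm
eps_ox = 3.9 * eps0    # SiO2
eps_si = 11.7 * eps0   # silicon
kT_q = 0.026           # thermal voltage at room temperature, V
ni = 1e10              # silicon intrinsic carrier density, cm^-3

def vt_nmos(Na=1e17, Tox=2e-7, Vfb=-0.9, Qox=0.0):
    """Textbook long-channel nMOS threshold voltage:
        Vt = Vfb + 2 phiB + sqrt(2 q eps_si Na (2 phiB)) / Cox - Qox/Cox,
    showing the body-doping (Na), oxide-thickness (Tox), and fixed-
    charge (Qox) dependences listed in point 3). Tox in cm."""
    Cox = eps_ox / Tox                                  # F/cm^2
    phiB = kT_q * math.log(Na / ni)                     # bulk potential, V
    Qdep = math.sqrt(2.0 * q * eps_si * Na * 2.0 * phiB)  # C/cm^2
    return Vfb + 2.0 * phiB + Qdep / Cox - Qox / Cox
```

Raising the body doping or thickening the oxide raises Vt, while fixed oxide charge shifts it through the -Qox/Cox term, which is exactly the variation catalogued in point 3).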
This document describes the fabrication and characterization of vertically stacked silicon nanowire field effect transistors for biosensing applications. A process using BOSCH etching and sacrificial oxidation is developed to create arrays of vertically stacked silicon nanowires with diameters less than 40 nm, lengths over 1 micron, and densities up to 10 nanowires per micron. The nanowires are electrically characterized in dry and liquid conditions, showing good electrostatic control in liquid with subthreshold swings of 100 mV/decade and on-currents over 2 mA/micron. The vertically stacked nanowire design and fabrication process aim to increase the sensitivity of field effect transistor biosensors.
On 22 and 23 June 2016 we organized an international symposium at the Fundación Ramón Areces on 'Two-dimensional materials: exploring the limits of physics and engineering'. In collaboration with the Massachusetts Institute of Technology (MIT), scientists from this prestigious research center presented the unique properties of materials such as graphene, which is only one atom thick yet stronger than steel and much lighter.
This document summarizes research on tri-layered magnetoelectric composites containing Metglas and various piezoelectric crystals. Key findings include:
1) Composites of Metglas/lithium niobate (LNO) and Metglas/gallium phosphate (GPO) exhibited direct magnetoelectric voltage coefficients of up to 0.95 V/(cm·Oe) and 0.24 V/(cm·Oe), respectively.
2) Under electromechanical resonance, the Metglas/LNO composite showed a very large coefficient of 250 V/(cm·Oe), while the Metglas/GPO composite reached a maximum of 23 V/(cm·Oe).
Ch7 lecture slides, Chenming Hu, Device for IC (Chenming Hu)
The document discusses technology scaling of MOSFETs used in integrated circuits. Key points include:
1) Feature sizes are reduced by around 30% with each new technology node to improve cost, speed, and power consumption.
2) Scaling challenges include increased subthreshold leakage current and threshold voltage roll-off.
3) Innovations such as high-k dielectrics, metal gates, strained silicon, and retrograde well doping help address these challenges and allow scaling to continue.
4) Variations in manufacturing must also be considered and techniques like multiple threshold voltages and supply voltages are used.
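The subthreshold-leakage point in 2) follows from the subthreshold swing S = ln(10)(kT/q)(1 + Cdep/Cox): every S millivolts of threshold-voltage roll-off costs roughly a decade of off-current. A small sketch:

```python
import math

def subthreshold_swing(cdep_over_cox, kT_q=0.026):
    """Subthreshold swing S = ln(10) (kT/q) (1 + Cdep/Cox), in mV/decade.
    The ideal room-temperature limit (Cdep/Cox -> 0) is ~60 mV/decade."""
    return math.log(10.0) * kT_q * (1.0 + cdep_over_cox) * 1e3

def ioff_ratio(delta_vt_mV, swing_mV):
    """Factor by which off-current grows when Vt drops by delta_vt_mV."""
    return 10.0 ** (delta_vt_mV / swing_mV)
```

High-k dielectrics and retrograde doping, listed in 3), both act by keeping Cdep/Cox small, i.e. by holding S close to its 60 mV/decade floor.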
This document discusses line reactance, zero-sequence reactance, and mutual zero-sequence reactance in power transmission lines. It begins by explaining the basics of inductance and deriving the formulas for the inductance of single-phase and three-phase transmission lines. It then shows how to represent the inductance of a three-phase line as a matrix and introduces the concepts of zero-sequence impedance and mutual zero-sequence impedance. The document uses symmetrical-component analysis to derive the zero-sequence impedance matrix and provides an example calculation for a specific 132 kV transmission line. It explains how to use the zero-sequence reactance value to estimate short-circuit currents.
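The per-phase series inductance derived for a transposed three-phase line is L = 2×10⁻⁷ ln(GMD/GMR) H/m. A sketch follows; the geometry and line length are illustrative, not the 132 kV example's actual data.

```python
import math

def phase_inductance(gmd_m, gmr_m):
    """Per-phase series inductance of a transposed three-phase line,
    L = 2e-7 * ln(GMD/GMR) H/m, with GMD the geometric mean spacing
    between phase conductors and GMR the conductor's geometric mean
    radius (both in metres)."""
    return 2e-7 * math.log(gmd_m / gmr_m)

def line_reactance(gmd_m, gmr_m, length_km, f=50.0):
    """Total positive-sequence series reactance X = 2*pi*f*L*length, ohms."""
    return 2.0 * math.pi * f * phase_inductance(gmd_m, gmr_m) * length_km * 1e3
```

The zero-sequence reactance discussed next is larger than this positive-sequence value because the ground-return path enters the matrix's mutual terms.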
This document summarizes charged pion production measurements from the T2K experiment. It discusses the need to understand pion production for T2K's oscillation analysis and as a background. It then presents recent T2K measurements of charged-current single pion production, including production in water targets using the ND280 detector and production in carbon targets using both ND280 and INGRID. The water results show suppression compared to predictions in specific kinematic regions.
Lecture21-BJT ExamplesAnd Pspice based sSim.pdfBalraj Singh
This document discusses a lecture on bipolar junction transistors (BJT) that includes a hand example and SPICE simulation example. The hand example analyzes the small-signal model of a BJT circuit with given component values and biases. The SPICE example simulates the circuit to determine voltage gain via transient and AC analyses and examines the effects of a capacitor. The document concludes that the maximum output voltage limits of a BJT circuit are set by ensuring the transistor remains in forward active mode and does not go into cutoff or saturation modes.
Similar to LHC limits on the Higgs-portal WIMPs (20)
This document discusses the use of recursive relations in MadGraph to more efficiently calculate multi-parton QCD processes. Recursive relations reduce the number of amplitude terms that must be calculated by relating higher-point terms to sums of lower-point terms. The author implemented off-shell recursive relation subroutines in the HELAS library to allow MadGraph to calculate processes with 5 or more gluons in the final state. Future work includes applying these techniques to processes with quarks and performing phase space integration and event generation.
MadGraph is an automatic amplitude generator that is useful for QCD processes but has limitations for processes with many final state particles. The file size of codes generated by MadGraph becomes very large for processes with more than 5 final state particles due to the large size of the color matrix and dual amplitudes. The author overcame this limitation by modifying MadGraph to generate smaller code files for each dual amplitude, calculating the color matrix on-the-fly rather than including it, and compiling the smaller files. This allowed simulation of multi-gluon processes with more than 5 gluons.
This document discusses extending the ability of MadGraph to simulate multi-jet events for new physics searches at the LHC. It proposes dividing the MadGraph code into smaller pieces by color decomposition to allow compilation on standard PCs. Higher-order corrections are included by evaluating needed color flows and reweighting events. Results are shown for total cross sections and distributions of gluonic processes generated at leading order with MadGraph.
This document discusses methods for generating multi-jet events with MadGraph. It describes limitations of current matrix element generators in simulating processes with more than 5 jets. The document then outlines strategies used in MadGraph to overcome its limitations and allow for multi-jet event generation. These include dividing amplitudes, speeding up evaluation with off-shell recursive relations, and reorganizing color summation using a 1/N_c expansion.
- The document examines the W+4 jets background to the top quark asymmetry observed at the Tevatron.
- Event generation is performed with MadGraph 5 using CTEQ6L1 PDFs and a factorization scale of 20 GeV.
- Kinematic distributions of the W+4 jets process are studied after event selection and reconstruction of top quarks.
- A large forward-backward asymmetry is seen in the W+4 jets background, suggesting background contaminations may be higher than expected and contributing to the observed anomaly.
This document provides a status report on multi-jet event generation. It discusses the importance of multi-jet signatures for new physics models and limitations of current event generators in generating processes with many final state particles. It then summarizes strategies developed to overcome these limitations, including using recursive relations to speed up amplitude evaluations, reorganizing color summations, and employing the leading color approximation and 1/N_c expansion to simplify calculations. Tests show the new methods agree with exact calculations and allow efficient generation of events with higher jet multiplicities than before.
This document discusses a method for generating multi-jet events using MadGraph. It notes that multi-jet signatures are important for new physics models but current matrix element generators cannot simulate more than 5 jets. The method uses off-shell recursive relations to speed up amplitude evaluations and reorganizes color summation using a 1/Nc expansion. It generates leading color approximation events and includes higher order corrections to event weights. Results are shown for gluonic processes demonstrating the ability to generate leading order multi-jet events.
This document discusses the W+4 jets background to the top quark asymmetry observed at the Tevatron. It finds that W+4 jets events can exhibit a large forward-backward asymmetry at the matrix element level. However, in background-enriched samples with zero b-tagged jets, no significant asymmetry is observed. The discrepancy may be due to biases introduced in the reconstruction of neutrino rapidity when requiring one or more b-tagged jets. Accounting properly for uncertainties in the neutrino rapidity determination could help reconcile measurements of the asymmetry between b-tagged and non-b-tagged event samples.
This document investigates the W+4 jets background to the top quark forward-backward asymmetry at the Tevatron. It finds a large forward-backward asymmetry in the W+4 jets background simulation. The choice of neutrino rapidity in the event reconstruction can significantly impact the asymmetry. While a simple study, it provides hints that the W+4 jets process could contribute to the observed top quark asymmetry.
The document discusses determining the neutrino mass hierarchy using a future medium-baseline reactor neutrino experiment. It finds that an optimal baseline length of around 50 km is required to distinguish the normal and inverted hierarchies. An energy resolution of less than 3% is needed to achieve sufficient sensitivity, and systematic errors in the resolution parameterization must be controlled to below 1% to maintain sensitivity. The analysis models the expected energy distributions at the far detector and performs a standard χ2 test to estimate the required sensitivity to determine the mass hierarchy.
Presentation @ KIAS pheno group end year meeting: 2012.12.20Yoshitaro Takaesu
This document discusses the sensitivity of a future medium-baseline reactor neutrino experiment to determine the neutrino mass hierarchy.
1) For an exposure of 20 GW thermal power, 5 kiloton detector mass, and 5 years of running, an optimal baseline length of around 50 km is required. The energy resolution needs to be less than 3% statistical error and less than 1% systematic error.
2) With these parameters, the experiment could measure neutrino oscillation parameters with 0.5% level of accuracy.
3) The study provides the minimum requirements for the energy resolution to determine the mass hierarchy. More realistic studies must account for factors like the distribution of reactors within 100 km of the far detector.
This document discusses the sensitivity of future medium-baseline reactor neutrino experiments to determine the neutrino mass hierarchy. It finds that an optimal baseline length is around 50km, and an energy resolution with a statistical error component below 3% and systematic error below 1% is required to achieve a sensitivity of (Δχ2)min ≥ 9. With 20GW thermal power, 5kt detector mass, and 5 years of data, it is estimated that neutrino oscillation parameters can be measured with approximately 0.5% accuracy.
The document discusses using reactor neutrino experiments to determine the neutrino mass hierarchy. It finds that a baseline length of around 50km is optimal. An energy resolution with statistical error less than 3% and systematic error less than 1% is required to determine the mass hierarchy with reactor neutrinos. Specifically, a 20GW reactor with a 5 kiloton detector running for 5 years could achieve this resolution and determine the mass hierarchy.
Medium baseline reactor neutrino experiments have the potential to determine the neutrino mass hierarchy. An energy resolution of less than 3% is required to achieve good sensitivity for a 20GW, 5kt, 5 year experiment. The optimal baseline length is approximately 50km, and systematic uncertainties in the energy resolution should be less than 1% . Such an experiment could measure oscillation parameters with 0.5% accuracy and have a chance to determine the mass hierarchy at greater than 3σ significance.
This document discusses the sensitivity of a medium baseline reactor neutrino experiment to determine the neutrino mass hierarchy. It finds that:
1) For a 16.5GW reactor flux, 5kton detector, and 5 year run time, an optimal baseline length of around 50km could determine the mass hierarchy at over 2-sigma significance with an energy resolution of less than 3%.
2) The energy resolution is crucial, as a resolution of 3% reduces sensitivity by around 40% compared to 2%, and shortens the optimal baseline length by about 5km.
3) Interference from other reactor sites and cores significantly impacts the sensitivity, so their locations and flux distributions must be carefully considered in the experiment design.
1) Reactor neutrino experiments have the potential to determine the neutrino mass hierarchy independently of CP phase and matter effects using medium-baseline detectors.
2) The sensitivity of a RENO50-like experiment to determine the mass hierarchy depends on the energy resolution and interference from multiple reactor sites and cores.
3) With a 16.5GW reactor source, 10kt detector, and 5 years of data, energy resolutions of a<3% and b<0.5% would provide over 80% probability of a 2-sigma determination of the mass hierarchy, while a<2% and b<1% could achieve a 3-sigma determination.
The document discusses the sensitivity to the mass hierarchy determination of future reactor neutrino experiments. It finds that with a 16.5GW reactor complex, 18 kiloton detector, and 5 years of data, an energy resolution of less than 3% at 1 MeV plus 0.5% would be required for a greater than 3-sigma determination of the mass hierarchy. The energy resolution and reduction of uncertainties are important factors, as is accounting for interference among multiple reactor cores. Significant efforts are underway to determine the mass hierarchy with future reactor neutrino experiments.
This document discusses the sensitivity of future reactor neutrino experiments to determine the neutrino mass hierarchy. It finds that to achieve a greater than 3-sigma determination within 5 years, an energy resolution of less than 3% for the alpha term and less than 0.5% for the beta term is required. Interference among reactor cores can significantly impact the sensitivity. The experiment would also need an 18 kiloton detector located 50 km from a 16.5 gigawatt reactor complex. With such parameters, neutrino oscillation parameters could be measured to less than 1% accuracy.
Immersive Learning That Works: Research Grounding and Paths ForwardLeonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and ‘Immersion Cube’ frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences and spotlighting research frontiers, along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
ESPP presentation to EU Waste Water Network, 4th June 2024 “EU policies driving nutrient removal and recycling
and the revised UWWTD (Urban Waste Water Treatment Directive)”
The debris of the ‘last major merger’ is dynamically youngSérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the
‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor
collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the
MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space,
because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia
DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations
at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based
on a simple phase-mixing model, the observed number of caustics are consistent with a merger that occurred 1–2 Gyr ago.
We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative
measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data
1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’
did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within
the last few Gyr, consistent with the body of work surrounding the VRM.
The technology uses reclaimed CO₂ as the dyeing medium in a closed loop process. When pressurized, CO₂ becomes supercritical (SC-CO₂). In this state CO₂ has a very high solvent power, allowing the dye to dissolve easily.
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste...Sérgio Sacani
Context. With a mass exceeding several 104 M⊙ and a rich and dense population of massive stars, supermassive young star clusters
represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions
among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate
the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars.
The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically,
the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec.
Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within
and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation
were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a
photon flux threshold of approximately 2 × 10−8 photons cm−2
s
−1
. The X-ray sources exhibit a highly concentrated spatial distribution,
with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known
massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptxMAGOTI ERNEST
Although Artemia has been known to man for centuries, its use as a food for the culture of larval organisms apparently began only in the 1930s, when several investigators found that it made an excellent food for newly hatched fish larvae (Litvinenko et al., 2023). As aquaculture developed in the 1960s and ‘70s, the use of Artemia also became more widespread, due both to its convenience and to its nutritional value for larval organisms (Arenas-Pardo et al., 2024). The fact that Artemia dormant cysts can be stored for long periods in cans, and then used as an off-the-shelf food requiring only 24 h of incubation makes them the most convenient, least labor-intensive, live food available for aquaculture (Sorgeloos & Roubach, 2021). The nutritional value of Artemia, especially for marine organisms, is not constant, but varies both geographically and temporally. During the last decade, however, both the causes of Artemia nutritional variability and methods to improve poorquality Artemia have been identified (Loufi et al., 2024).
Brine shrimp (Artemia spp.) are used in marine aquaculture worldwide. Annually, more than 2,000 metric tons of dry cysts are used for cultivation of fish, crustacean, and shellfish larva. Brine shrimp are important to aquaculture because newly hatched brine shrimp nauplii (larvae) provide a food source for many fish fry (Mozanzadeh et al., 2021). Culture and harvesting of brine shrimp eggs represents another aspect of the aquaculture industry. Nauplii and metanauplii of Artemia, commonly known as brine shrimp, play a crucial role in aquaculture due to their nutritional value and suitability as live feed for many aquatic species, particularly in larval stages (Sorgeloos & Roubach, 2021).
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...Travis Hills MN
Travis Hills of Minnesota developed a method to convert waste into high-value dry fertilizer, significantly enriching soil quality. By providing farmers with a valuable resource derived from waste, Travis Hills helps enhance farm profitability while promoting environmental stewardship. Travis Hills' sustainable practices lead to cost savings and increased revenue for farmers by improving resource efficiency and reducing waste.
hematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills
ESR spectroscopy in liquid food and beverages.pptxPRIYANKA PATEL
With increasing population, people need to rely on packaged food stuffs. Packaging of food materials requires the preservation of food. There are various methods for the treatment of food to preserve them and irradiation treatment of food is one of them. It is the most common and the most harmless method for the food preservation as it does not alter the necessary micronutrients of food materials. Although irradiated food doesn’t cause any harm to the human health but still the quality assessment of food is required to provide consumers with necessary information about the food. ESR spectroscopy is the most sophisticated way to investigate the quality of the food and the free radicals induced during the processing of the food. ESR spin trapping technique is useful for the detection of highly unstable radicals in the food. The antioxidant capability of liquid food and beverages in mainly performed by spin trapping technique.
Phenomics assisted breeding in crop improvementIshaGoswami9
As the population is increasing and will reach about 9 billion upto 2050. Also due to climate change, it is difficult to meet the food requirement of such a large population. Facing the challenges presented by resource shortages, climate
change, and increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional
genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding
the complex characteristics of multiple gene, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data that can
be linked to genomics information for crop improvement at all growth stages have become as important as genotyping. Thus,
high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes
during crop growing stages at the organism level, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology,
and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
1. LHC limits on the Higgs-portal WIMPs
Yoshitaro Takaesu (U. of Tokyo)
Based on arXiv:1407.6882, in collaboration with M. Endo (U. of Tokyo)
2. Portal models to Hidden Sector
Consider another world whose particles are SM singlets (the Hidden Sector). These particles interact with our SM world through gravity. They may also interact through portal operators:
- Neutrino portal: $HL$ — sterile neutrino
- Vector portal: $F_Y^{\mu\nu} X_{\mu\nu}$ — dark photon
- Axion portal: $\frac{1}{f_S} F_{\mu\nu}\tilde{F}^{\mu\nu} S$ — axion-like particle
- Higgs portal: $|H|^2 S^2$ — Higgs invisible decay, DM?
In this talk, we discuss the Higgs-portal possibility.
3. Constraints on Higgs-portal models
- Relic abundance
- Direct detection
- H invisible decay
[De Simone, Giudice, Strumia: 1402.6287]
Tight constraints on Higgs-portal "DM".
4. Constraints on Higgs-portal models (cont.)
The constraints above apply to Higgs-portal "DM" — but what if the WIMP is not the DM?

5. Constraints on Higgs-portal models (cont.)
If the WIMP is not the DM, these constraints are relaxed, and it becomes important to know to what extent the LHC can explore the heavier Higgs-portal models: a collider search for heavy Higgs-portal WIMPs, which need not be the DM.
6. Higgs-portal models discussed
Scalar, Vector, and Anti-symmetric tensor WIMPs. The WIMPs are SM singlets, and a $Z_2$ parity is assumed for their stability.

$\mathcal{L}_S = \frac{1}{2}\partial_\mu S\,\partial^\mu S - \frac{1}{2}M_S^2 S^2 - c_S|H|^2 S^2 - \lambda_S S^4$
[A. Djouadi et al.: 1205.3169; S. Kanemura et al.: 1005.5651]

$\mathcal{L}_V = -\frac{1}{4}V^{\mu\nu}V_{\mu\nu} + \frac{1}{2}M_V^2 V^\mu V_\mu + c_V|H|^2 V^\mu V_\mu + \cdots$

$\mathcal{L}_B = \frac{1}{4}\partial_\lambda B^{\mu\nu}\partial^\lambda B_{\mu\nu} - \frac{1}{2}\partial^\mu B_{\mu\nu}\partial_\rho B^{\rho\nu} - \frac{1}{4}M_B^2 B^{\mu\nu}B_{\mu\nu} - c_B|H|^2 B^{\mu\nu}B_{\mu\nu} + \cdots$
[O. Cata, A. Ibarra: 1404.0432]

After EWSB:
$m_S^2 = M_S^2 + 2c_S v^2$, $\quad m_V^2 = M_V^2 + 2c_V v^2$, $\quad m_B^2 = M_B^2 + 4c_B v^2$

Details of these spin-1 models (UV completion etc.) will not be discussed in this talk (cf. Y. Farzan, A. R. Akbarieh (2012); S. Baek, P. Ko, W. Park, E. Senaha (2013)).
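As a quick numerical illustration of the mass relations after EWSB, here is a minimal sketch (the bare mass and coupling below are made-up example values; the prefactors 2 and 4 follow the relations as quoted on the slide, with $v$ the Higgs vacuum expectation value):

```python
import math

V_EW = 246.0  # Higgs vacuum expectation value in GeV (assumed convention)

def physical_mass(M, c, prefactor):
    """Physical WIMP mass after EWSB: m^2 = M^2 + prefactor * c * v^2.

    prefactor = 2 for the scalar (c_S) and vector (c_V) cases,
    prefactor = 4 for the antisymmetric-tensor (c_B) case,
    following the relations quoted on the slide.
    """
    return math.sqrt(M**2 + prefactor * c * V_EW**2)

# Hypothetical example: bare mass 200 GeV, portal coupling 0.5
m_scalar = physical_mass(200.0, 0.5, 2)  # scalar: m_S^2 = M_S^2 + 2 c_S v^2
m_tensor = physical_mass(200.0, 0.5, 4)  # tensor: m_B^2 = M_B^2 + 4 c_B v^2
```

Note that for sizable couplings the EWSB contribution dominates the physical mass, which is why the collider limits are usually quoted in the $(m_\chi, c_\chi)$ plane directly.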
8. LHC searches considered
8 TeV missing-ET dark matter searches, cut-based analyses:
- Channel: VBF — Analysis: CMS, EPJC 74 (2014) 2980 (Br_inv limit) — Signal Xsec: NLO QCD+EW (HAWK)
- Channel: Mono-Z — Analysis: ATLAS, PRD 90, 012004 (2014) (fiducial Xsec limit) — Signal Xsec: NLO QCD+EW (HAWK)
- Channel: Mono-jet — Analysis: CMS, 1408.3583 (signal-event limit) — Signal Xsec: NLO QCD (MCFM)

* Cut acceptances are estimated from LO events with MG5 + Pythia + Delphes.
* Other channels, such as ℓ + MET, may have comparable sensitivity but are not considered here.
9. Limits for the heavy Higgs-portal WIMPs — Vector
[Figure: 95% CL upper limits on the coupling $c_\chi$ as a function of $m_\chi$ (50–350 GeV) from the VBF, Mono-jet, and Mono-Z channels; the low-mass region is excluded by the Higgs invisible decay.]

$\mathcal{L} = -\frac{1}{4}V_{\mu\nu}V^{\mu\nu} + \frac{1}{2}m_V^2 V_\mu V^\mu + c_V|H|^2 V_\mu V^\mu + \cdots$

- VBF sets the strongest limits.
- Couplings $\gtrsim 0.5$ can be constrained by the 8 TeV LHC.
* Limits for large coupling may not be valid due to unitarity (or breakdown of the perturbative calculation), depending on the UV model.
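How such a coupling limit is read off from an event-yield limit can be sketched as follows: since the pair-production cross section scales as $c_\chi^2$, the excluded coupling follows from a square root. All numbers below are hypothetical placeholders, not values from the actual analyses:

```python
import math

def coupling_limit(n_sig_limit, xsec_c1_fb, lumi_ifb, acceptance):
    """95% CL coupling limit from the sigma ∝ c^2 scaling.

    n_sig_limit : limit on the number of signal events from the search
    xsec_c1_fb  : signal cross section at c = 1, in fb
    lumi_ifb    : integrated luminosity, in fb^-1
    acceptance  : cut acceptance (estimated e.g. with MG5+Pythia+Delphes)
    """
    n_c1 = xsec_c1_fb * lumi_ifb * acceptance  # expected events at c = 1
    return math.sqrt(n_sig_limit / n_c1)

# Hypothetical illustration only:
c_lim = coupling_limit(n_sig_limit=100.0, xsec_c1_fb=50.0,
                       lumi_ifb=20.0, acceptance=0.4)
```

The quadratic scaling means the limit weakens only slowly as the cross section drops, which is why the VBF channel, with its comparatively large rate, dominates.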
13. Naive projection from 8 TeV to 14 TeV
We need to estimate the 14 TeV constraint on $N^{\rm lim}_{\rm sig}$, which enters the coupling limit through
$c_\chi^2\,\sigma(m_\chi, c_\chi = 1)\,L < N^{\rm lim}_{\rm sig}$.
$N^{\rm lim}_{\rm sig}$ is roughly estimated as follows:
$N^{\rm lim}_{\rm sig} \simeq 2\,\sigma_{\rm tot}$ (95% CL, simple Gaussian), with $\sigma_{\rm tot}^2 = \sigma_{\rm sys}^2 + \sigma_{\rm stat}^2$.
- The relative systematic error does not improve: $\left.\frac{\sigma_{\rm sys}}{N_{\rm BG}}\right|_{14\,{\rm TeV}} = \left.\frac{\sigma_{\rm sys}}{N_{\rm BG}}\right|_{8\,{\rm TeV}}$.
- The relative statistical error reduces as $1/\sqrt{N_{\rm BG}}$: $\left.\frac{\sigma_{\rm stat}}{N_{\rm BG}}\right|_{14\,{\rm TeV}} = \sqrt{\frac{N_{\rm BG}^{8\,{\rm TeV}}}{N_{\rm BG}^{14\,{\rm TeV}}}}\left.\frac{\sigma_{\rm stat}}{N_{\rm BG}}\right|_{8\,{\rm TeV}}$.
$N_{\rm BG}$ is estimated by theoretical calculations with the experimental cuts, and $N_{\rm BG}^{14\,{\rm TeV}}/N_{\rm BG}^{8\,{\rm TeV}}$ by simulation (MG5 + Pythia + Delphes).
Missing-ET cuts @ 14 TeV: > 450 GeV (Mono-Z), > 400 GeV (Mono-jet), > 130 GeV (VBF).
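The rescaling above can be written out as a short sketch. The function is a direct transcription of the two assumptions on the slide (relative systematics frozen, relative statistics scaling as $1/\sqrt{N_{\rm BG}}$); the input numbers in any call would be hypothetical:

```python
import math

def projected_limit_14tev(sys_rel_8, stat_rel_8, n_bg_8, bg_ratio_14_to_8):
    """Naive 95% CL signal-yield limit at 14 TeV from 8 TeV inputs.

    sys_rel_8        : relative systematic error on the background at 8 TeV
    stat_rel_8       : relative statistical error at 8 TeV
    n_bg_8           : background yield at 8 TeV
    bg_ratio_14_to_8 : N_BG(14 TeV) / N_BG(8 TeV), from simulation

    Assumes the relative systematic error does not improve, while the
    relative statistical error scales as 1/sqrt(N_BG).
    """
    n_bg_14 = n_bg_8 * bg_ratio_14_to_8
    sys_14 = sys_rel_8 * n_bg_14                                  # same relative size
    stat_14 = stat_rel_8 / math.sqrt(bg_ratio_14_to_8) * n_bg_14  # sqrt(N8/N14) scaling
    sigma_tot = math.hypot(sys_14, stat_14)  # quadrature sum
    return 2.0 * sigma_tot  # N_sig^lim ≈ 2 sigma_tot (simple Gaussian, 95% CL)
```

In the systematics-dominated regime the absolute uncertainty grows linearly with the background, so the projected limit improves only through the larger signal cross section at 14 TeV, not through the limit on the yield itself.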
16. Summary
- LHC constraints on the heavy Higgs-portal models have been discussed.
- The 8 TeV LHC can constrain Higgs-portal couplings below 1 for the vector and tensor cases; the limit on the scalar model is very weak.
- The 14 TeV LHC can be sensitive to O(0.1) couplings of the vector and tensor models; the limit on the scalar model is still very weak.
- The VBF channel puts stronger limits on Higgs-portal models than the Mono-jet and Mono-Z channels.