1) The document discusses the allocation of equipment in a multi-stage manufacturing process, where multiple pieces of equipment are used at each stage to minimize wait times between stages.
2) It presents a linear model to estimate the effects of main equipment, and of interactions between adjacent stages, on final product quality. Fractional factorial designs are used to analyze the model with multiple factors of mixed levels.
3) As an example, a six-stage process with two two-level and three three-level factors is examined. A resolution III fractional factorial design is constructed by taking the product of two-level and three-level fractional designs to estimate all main effects and the two-way interaction effects between adjacent stages.
Y. Lim et al. / Journal of the Korean Statistical Society 44 (2015) 366–375
Fig. 1. The flow of three batches for case 1.
Fig. 2. The flow of three batches for case 2.
Besides SPC, another problem occurring in the multi-stage process is the allocation of equipment at each stage of the whole process. Equipment in a stage is defined here as a tangible property used to accomplish the same manufacturing operations. Examples of equipment in a stage of a multi-stage process include the devices, machines, and tools used for manufacturing in each stage.
In most multi-stage processes there are multiple pieces of equipment at most of the stages, and the raw materials are fed to each stage by a dispatching rule. The dispatching rule of the manufacturing process keeps the process operating by minimizing the dead time, that is, the waiting time between unit processes. Such a dispatching rule makes the raw materials flow smoothly through the whole process and thus maximizes productivity. Consider a multi-stage process with three batch processes (say, A, B, and C) whose lead times, the times required to pass through each process, are 20, 60, and 40 min, respectively.
Consider case 1, where there is only a single piece of equipment at each process. The flow of three batches for case 1 is described in Fig. 1. In case 1, batch 1, having passed through process A, can be put into process B at 20 min and into process C at 1 h 20 min without any waiting time. On the other hand, batch 2 comes out of process A at 40 min but must wait 40 min before entering process B, and batch 3 must wait 80 min. The total lead time of case 1 is 240 min.
Consider case 2, where there are 1, 3, and 2 pieces of equipment for processes A, B, and C, respectively. The flow of three batches for case 2 is described in Fig. 2. In case 2, it is easily seen that there is no waiting time between processes and the total lead time is only 160 min.
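The two cases above can be checked with a short simulation. The sketch below (in Python; the function name and the FIFO machine-assignment rule are illustrative assumptions, not from the paper) feeds the three batches through the serial stages and reports the finish time of the last batch:

```python
import heapq

def total_lead_time(lead_times, machines, n_batches=3):
    """Finish time of the last batch when batches flow FIFO through
    serial stages; each stage has `machines[s]` identical machines."""
    ready = [0.0] * n_batches  # time each batch is available for the next stage
    for lt, m in zip(lead_times, machines):
        free = [0.0] * m       # next free time of each machine at this stage
        heapq.heapify(free)
        for i in range(n_batches):
            t = heapq.heappop(free)           # earliest available machine
            ready[i] = max(ready[i], t) + lt  # wait if machine or batch not ready
            heapq.heappush(free, ready[i])
    return max(ready)

print(total_lead_time([20, 60, 40], [1, 1, 1]))  # case 1: 240.0 min
print(total_lead_time([20, 60, 40], [1, 3, 2]))  # case 2: 160.0 min
```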
As an example of a multi-stage process with multiple equipment, consider the layer process of PCB manufacturing. In the layer process, four processing units, namely PCB preprocessing, D/F lamination, D/F exposure, and the DES process (circuit development, PCB etching, D/F stripping), are connected in series. Equipment for the layer process is typically very expensive; a D/F exposure machine, for example, costs a couple of million dollars or more. Despite the high price, several pieces of equipment are provided to satisfy the dispatching rule. Fig. 3 shows the operation of multiple equipment in the layer process: there is a single piece of equipment for the preprocessing and lamination processes, while there are three and two pieces for the exposure and DES processes, respectively. All the final products, the manufactured PCBs, are inspected and assessed as pass (conforming) or fail (nonconforming) through several inspection procedures such as AOI (automatic optical inspection) and E-check, and then the yield of the manufacturing process is calculated.
If some equipment is occupied at a certain stage during the material flow, then one of the remaining pieces of equipment can be used for the operation of that stage. An unexpected problem occurring in such multiple-equipment situations is that equipment at different stages may have significantly related effects on the quality of the final product. That is, some specific allocations of equipment produce products of better quality than other allocations. It has been reported by engineers that certain equipment combinations produce high yield, whereas others produce low yield. Such harmonious and disharmonious equipment combinations can be revealed through a study of the equipment allocation. Although the optimal equipment combination is most preferable, the equipment corresponding to the current stage of the optimal combination may not be available because it is still operating on previously allocated materials. In such cases, the subsequent best equipment combinations will be considered. For such allocation of subsequent equipment combinations, all possible equipment combinations will be ordered according to their estimated yields. Then the materials
Fig. 3. Flow of equipment path for layer process in the PCB manufacturing process.
at the current stage will be fed to the next best available equipment, taking into account the equipment passed through up to the previous stages. An allocation of equipment for the whole multi-stage process is called an equipment path. Hence, the task of equipment allocation is to use harmonious sets of equipment for production and to avoid disharmonious sets. In fact, this has long been an annoying problem for engineers, who have relied on their own personal experience or on the main effects of equipment alone, without considering the related effects of adjacent processes.
In Section 2, the design and analysis for the equipment path are presented. In Section 3, a case study is developed. In Section 4, approaches for the missing value problem are explained. In Section 5, a simulation study presents results for three possible cases of design.
2. Design and analysis for equipment path
Suppose that there is a k-stage process. Then the final product quality can be expressed as a linear function of stage effects and inter-stage effects. The stage effect is the main effect, and the inter-stage effect is the interaction effect of two or more stages. It is often true that higher-order interaction effects tend to be negligible and can properly be disregarded. For parsimony of the response model, only the main effects and the two-way interaction effects of two adjacent stages are considered as possibly significant effects; that is, interaction effects involving more than two adjacent stages are taken to be insignificant for the final product quality. Let the ordered set of equipment be $(i_1, i_2, \ldots, i_k)$, that is, equipment $i_l$ is used at stage $l$ for $l = 1, 2, \ldots, k$. Then the final product quality can be expressed as the response of a linear model with the main effects and the two-way interaction effects, that is,
$$ y_{i_1 i_2 \cdots i_k j} = \mu + \sum_{l=1}^{k} \beta_{i_l} + \sum_{l=1}^{k-1} \beta_{i_l, i_{l+1}} + \varepsilon_{i_1 i_2 \cdots i_k j} \qquad (1) $$

where $\mu$ denotes the overall mean, $\beta_{i_l}$ the main effect of equipment $i_l$ at stage $l$, $\beta_{i_l, i_{l+1}}$ the interaction effect of equipment $i_l$ and $i_{l+1}$ used in stages $l$ and $l+1$, respectively, and $j$ the replication for the specific equipment path. It is assumed that the error term $\varepsilon$ follows a normal distribution with mean zero and variance $\sigma_\varepsilon^2$.
In a k-stage process, each stage acts as a factor and each piece of equipment in a stage as a level of that factor. Thus the model in Eq. (1) can be treated as a linear model with k factors of mixed levels. Since diversity in the number of levels is not preferred in the analysis of linear models, it is assumed that there are only two or three levels at each stage. There can actually be more than three pieces of equipment, but they can be classified into two or three groups according to the similarity of their mechanical properties; the equipment in a group is then treated as the same level. The number of runs required by a full three-level factorial design $3^k$ increases geometrically as $k$ increases. Running a $3^{-f}$ fraction of a $3^k$ design is called a three-level fractional factorial design $3^{k-f}$. In general, a $3^{k-f}$ design in which the effects of interest are estimable could be used. To assign a two-level factor A to a $3^{k-f}$ design, the dummy-level technique is used: the two-level factor A is formally turned into a three-level factor by repeating one of its levels. The third level of A is a duplicate of one of the two actual levels, which could be chosen as the more frequently used equipment.
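The dummy-level technique amounts to a simple decoding map. A minimal sketch in Python (the level coding shown is an illustrative assumption; the paper's appendix code is in R):

```python
# A two-level factor A (equipment 1 or 2) is assigned to a three-level design
# column; the artificial third level duplicates a real level, here level 1,
# assumed to be the more frequently used equipment.
mapping = {1: 1, 2: 2, 3: 1}            # pseudo level -> real equipment level

design_column = [1, 2, 3, 1, 2, 3]      # a three-level column from the design
real_column = [mapping[x] for x in design_column]
print(real_column)  # [1, 2, 1, 1, 2, 1]
```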
Now a special case is considered. Suppose that the numbers of equipment at two adjacent stages differ, and let the number of two-level factors and the number of three-level factors each be $k^*$. Since there are many factors with two or three levels, a fractional factorial design should be used for the analysis of the linear model. We are interested in how to obtain a fractional factorial design with these mixed-level factors in which all the main effects and all the two-way interaction effects between two adjacent stages are estimable.
A design of resolution III does not confound main effects with one another, but does confound main effects with two-factor interactions. Since the main effects are of interest, we choose the resolution III design with the smallest fraction for the sake of a small number of experiments. Let $2^{k^*-f}_{\mathrm{III}}$ and $3^{k^*-f}_{\mathrm{III}}$ denote the resolution III two-level fractional factorial design with fraction $1/2^f$ and the resolution III three-level fractional factorial design with fraction $1/3^f$, respectively. Then the product
Table 1
Number of equipment in a six-stage manufacturing process.

Stage (factor)     A  B  C  D  E  F
Equipment (level)  2  3  2  3  2  3
Table 2
The $2^{3-1}_{\mathrm{III}} \times 3^{3-1}_{\mathrm{III}}$ product design: the four runs of the two-level fraction (factors A, C, E) crossed with the nine runs of the three-level fraction (factors B, D, F), with the resulting 36 runs rearranged in the factor order A B C D E F.
design $D_k = 2^{k^*-f}_{\mathrm{III}} \times 3^{k^*-f}_{\mathrm{III}}$, in which all runs of the $3^{k^*-f}_{\mathrm{III}}$ design are conducted at each run of the $2^{k^*-f}_{\mathrm{III}}$ design, is used for the analysis of the model in order to make the two-way interaction effects of two adjacent stages estimable.
Since the data are obtained from the manufacturing history of the multi-stage process, the number of replications varies drastically with the equipment path. Certain equipment paths will have a large number of replications, while others will have a small number. Moreover, some equipment paths will have no observation at all, which corresponds to the missing case. The average response can be expressed from Eq. (1) as

$$ \bar{y}_{i_1 i_2 \cdots i_k \cdot} = \mu + \sum_{l=1}^{k} \beta_{i_l} + \sum_{l=1}^{k-1} \beta_{i_l, i_{l+1}} + \bar{\varepsilon}_{i_1 i_2 \cdots i_k \cdot}. \qquad (2) $$

The variance of the average error $\bar{\varepsilon}$ is $\sigma_\varepsilon^2 / r$ for $r$ replications.
3. A case study
Suppose that a multi-stage process has three two-level factors and three three-level factors, as given in Table 1. Here the number of stages corresponds to the number of factors, and the number of equipment in each stage corresponds to the number of levels of each factor.
An equipment path is the allocation of the specific equipment combination that the product materials pass through during the whole manufacturing process. The data used for the analysis are the equipment paths and the corresponding responses. It is of interest to find the equipment path with the optimum response, as well as the subsequent equipment paths sorted in the order of their responses.
The data to be analyzed are mostly historical, not experimental. The historical data contain the quality characteristic y and all the information about the equipment paths through which the materials have passed. It is assumed that the response is affected by all the main effects and the two-way interaction effects of two adjacent stages. The historical data are sorted according to the equipment paths; it suffices to obtain the average response $\bar{y}$ and the number of replications for each equipment path. Since the numbers of replications generally differ across paths, the heteroscedasticity of $\bar{y}$ should be considered, and thus the weighted least squares method is used in estimating the effects. Since the data are obtained from the manufacturing history of the multi-stage process, some equipment paths corresponding to design points of the $2^3 \times 3^3$ factorial design have no observation, which corresponds to the missing cases.
In order to detect the significant effects in model (2), we may use all the historical data, or the part of them corresponding to a fractional factorial design so as to maintain the balance of the design points.
A practical method is now discussed for obtaining a fractional factorial design with these mixed-level factors in which all the main effects and the two-way interaction effects between two adjacent stages are estimable.
First, find the product design between the two-level fractional factorial design $2^{3-1}_{\mathrm{III}}$ and the three-level fractional factorial design $3^{3-1}_{\mathrm{III}}$. Then all the two-way interaction effects between the two-level factors and the three-level factors are estimable, and thus all the two-way interaction effects between two adjacent stages are estimable. Table 2 shows the product design between $2^{3-1}_{\mathrm{III}}$ with defining contrast ACE and $3^{3-1}_{\mathrm{III}}$ with defining contrast BDF$^2$. It can easily be checked that at each run of $2^{3-1}_{\mathrm{III}}$ all nine runs of $3^{3-1}_{\mathrm{III}}$ are conducted.
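The construction can be sketched in a few lines of Python (the 0/1 and 0/1/2 level codings and the modular form of the defining relations are assumed conventions here; the paper's appendix uses R):

```python
from itertools import product

# 2^{3-1}_III half fraction for A, C, E with defining relation I = ACE,
# taken here as x_A + x_C + x_E = 0 (mod 2) in 0/1 coding.
d2 = [r for r in product(range(2), repeat=3) if sum(r) % 2 == 0]

# 3^{3-1}_III one-third fraction for B, D, F with defining contrast BDF^2,
# taken here as x_B + x_D + 2*x_F = 0 (mod 3) in 0/1/2 coding.
d3 = [(b, d, f) for b, d, f in product(range(3), repeat=3)
      if (b + d + 2 * f) % 3 == 0]

# Product design: all nine runs of d3 at each of the four runs of d2,
# rearranged in the factor order A B C D E F.
runs = [(a, b, c, d, e, f) for (a, c, e) in d2 for (b, d, f) in d3]
print(len(d2), len(d3), len(runs))  # 4 9 36
```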
This idea is implemented in the computational algorithm of the R package PLANOR, developed to construct fractional or blocked factorial designs with mixed levels (Monod, Bouvier, & Kobilinsky, 2013). The inputs to the R program are
Table 3
Size of effects (multiples of $\sigma_\varepsilon$) in the simulation model.

Level/factor   $a_i$   $b_j$   $d_k$   $e_l$   $a_1 b_j$   $a_2 b_j$
1              −1      −2      −1.5    −1      −3          3
2              1       0       0       1       0           0
3              –       2       1.5     –       3           −3

Overall mean effect $\mu = 65$.
the factors with their numbers of levels, the model with the effects of interest, and the unit size (see Appendix A.1). The output of the R program is the relevant experimental design.
By analyzing the historical data, or part of them, it is expected that the vital effects for the response can be detected. The popular graphical method for identifying vital effects is the half-normal probability plot, obtained by decomposing the effects with more than two degrees of freedom into effects with a single degree of freedom. In order to eliminate the subjectivity of graphical procedures for detecting significant effects, many quantitative procedures have been proposed in the recent literature. In this article, the vital effects are selected by the LGB method of Lawson, Grimshaw, and Burt (1998), a hybrid method based on the half-normal probability plot that blends Lenth's (1989) and Loh's (1992) methods; the LGB method is known to be uniformly more powerful than Lenth's. The method consists of fitting a simple least squares line to the half-normal probability plot and calculating prediction limits; the effects falling outside the prediction limits are significant.
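The Lenth (1989) ingredient of the LGB hybrid, the pseudo standard error, is easy to sketch in Python (the critical value 2.3 below is purely illustrative; Lenth tabulates it by the number of contrasts, and the LGB method itself adds the fitted-line and prediction-limit steps):

```python
import statistics

def lenth_pse(effects):
    """Lenth's (1989) pseudo standard error for unreplicated designs."""
    abs_e = [abs(e) for e in effects]
    s0 = 1.5 * statistics.median(abs_e)
    trimmed = [a for a in abs_e if a < 2.5 * s0]  # drop likely-active effects
    return 1.5 * statistics.median(trimmed)

def significant(effects, t_crit=2.3):
    """Flag effects whose magnitude exceeds the margin t_crit * PSE."""
    margin = t_crit * lenth_pse(effects)
    return [abs(e) > margin for e in effects]

effects = [-3.0, 9.5, 0.4, -0.6, 0.2, 7.8, -0.3]
print(significant(effects))  # [True, True, False, False, False, True, False]
```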
Once the proper model for the quality characteristic y has been found, it is straightforward to order all possible equipment paths according to their predicted responses. In order to shorten the cycle time and increase the productivity of the manufacturing process, it is not advisable to insist on the optimal equipment path, since the equipment designated by the optimal path is often occupied processing previous materials. It is thus essential to find the subsequently optimal equipment paths. Such an ordering of equipment paths can be found by evaluating the predicted responses.
4. Missing value treatment
If all the design points in the factorial or fractional factorial design have at least one observation, then the design has no missing value. Otherwise, the design points with no observation correspond to missing value cases. Missing value cases occur often in historical data, and thus a remedy for such cases should be prepared. There are two general approaches to the missing value problem. The first is the exact analysis, where the model is estimated from the unbalanced data. The second is the approximate analysis, where the missing observations are estimated by the predicted values of the quality characteristic y, and those predicted values are used as imputed values; the analysis then proceeds as usual, just as if the imputed observations were real data. The first approach reduces the number of design points used in the analysis, while the second maintains the balance of the design points.
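The approximate analysis can be illustrated on a toy example. In the Python sketch below, the 2 × 2 layout, the numbers, and the main-effects-only prediction are illustrative simplifications (the paper predicts from the full fitted model instead):

```python
from itertools import product
from statistics import mean

# Observed average responses for a toy two-stage (2 x 2) process;
# path (1, 2) has no observation and is therefore a missing design point.
obs = {(1, 1): 60.0, (2, 1): 62.0, (2, 2): 66.0}

mu = mean(obs.values())

def dev(stage, level):
    """Deviation of a level's observed mean from the grand mean."""
    vals = [y for p, y in obs.items() if p[stage] == level]
    return mean(vals) - mu if vals else 0.0

# Impute each missing point with its predicted value; observed points stay.
full = list(product([1, 2], repeat=2))
imputed = {p: obs.get(p, mu + dev(0, p[0]) + dev(1, p[1])) for p in full}
print(round(imputed[(1, 2)], 2))  # 63.33
```

With the imputed value in place, the design is balanced again and the usual analysis can proceed.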
5. A simulation study
Consider a six-stage manufacturing process whose numbers of equipment are given in Table 1. A set of historical data is generated by simulation for the six-stage process. Out of the $2^3 \times 3^3$ factorial design points, one third are randomly selected to construct the historical data.
Without loss of generality, a large value of the response is preferred. It is assumed that four main effects, A, B, D, E, and one two-way interaction effect, A×B, are significant to the response. Since the sample average of the quality characteristic y is calculated for each equipment path, the error term of that sample average is assumed to follow $N(0, 1/n_{ijkl})$ for $n_{ijkl}$ replications. The number of replications in each equipment path is generated from the discrete uniform distribution between 1 and 10. Thus the simulation model for the generation of the sample average of the quality characteristic y is

$$ \bar{y}_{ijkl} = \mu + a_i + b_j + (ab)_{ij} + d_k + e_l + \bar{\varepsilon}_{ijkl}, \qquad i, l = 1, 2; \; j, k = 1, 2, 3 \qquad (3) $$

where $\bar{\varepsilon}_{ijkl} \sim N(0, \sigma_\varepsilon^2 / n_{ijkl})$ and $n_{ijkl} \sim$ discrete uniform $\{1, 2, \ldots, 10\}$. The sizes of the effects in Eq. (3) are given in Table 3. It can easily be checked that the optimal equipment path is A1B3D3E2, at which the mean response is 71.5.
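Model (3) with the effect sizes of Table 3 can be simulated directly. A Python sketch (the paper's simulations are in R; the seed and function name are arbitrary):

```python
import random

random.seed(1)

# Effect sizes from Table 3 (baseline case, sigma_eps = 1).
mu = 65.0
a = {1: -1.0, 2: 1.0}
b = {1: -2.0, 2: 0.0, 3: 2.0}
d = {1: -1.5, 2: 0.0, 3: 1.5}
e = {1: -1.0, 2: 1.0}
ab = {(1, 1): -3.0, (1, 2): 0.0, (1, 3): 3.0,
      (2, 1): 3.0, (2, 2): 0.0, (2, 3): -3.0}

def simulate_path_average(i, j, k, l, sigma=1.0):
    """One simulated path average under model (3), with its replication count."""
    n = random.randint(1, 10)                  # n_ijkl ~ discrete uniform {1,...,10}
    eps = random.gauss(0.0, sigma / n ** 0.5)  # error of the average: sd sigma/sqrt(n)
    return mu + a[i] + b[j] + ab[(i, j)] + d[k] + e[l] + eps, n

# The noiseless mean at the optimal path A1 B3 D3 E2 is 71.5.
print(mu + a[1] + b[3] + ab[(1, 3)] + d[3] + e[2])  # 71.5
```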
In order to screen the vital few effects from the assumed model (2) and then find the proper model, we consider three cases: using all the historical data, using only the part of the historical data corresponding to the fractional factorial design, and using pilot experimental data.
5.1. Using all the historical data
For the historical data, a data set Historical_72 is generated by the model in Eq. (3), and the vital effects are detected using the half-normal probability plot based on the LGB method. Then all the possible equipment paths are sorted in the order of their predicted responses.
Fig. 4. Half normal probability plot based on the LGB method.
Table 4
The estimates of the screened effects A, B, D, E and A×B.

Level/factor   $\hat{a}_i$   $\hat{b}_j$   $\hat{d}_k$   $\hat{e}_l$   $\widehat{a_1 b_j}$   $\widehat{a_2 b_j}$
1              −0.9811       −1.9886       −1.4597       −1.0831       −3.0545               3.0545
2              0.9811        −0.0264       −0.0119       1.0831        0.0980                −0.0980
3              –             2.0150        1.4716        –             2.9565                −2.9565

Overall mean effect $\hat{\mu} = 64.9238$.
To draw the half-normal probability plot, the treatment effects in the model need to be estimated, and the effects with more than one degree of freedom decomposed into effects with a single degree of freedom. The GLM procedure in R performs this process by projecting the data onto the successive orthogonal subspaces generated by the QR decomposition. To screen the significant effects, these effects are passed to the LGB function, implemented in the package 'daewr' by Lawson (2012), to obtain the LGB test statistic. The R code is presented in Appendix A.2.
One of the half-normal plots is shown in Fig. 4. In the figure, the LGB method finds the effects A1B1, B1, D1, E1, A1, D2 and B2 significant. Thus all true effects, A, B, D, E and A×B, are detected in this case.
In screening the significant effects, the hierarchy of effects is enforced automatically, which means that the relevant main effects are included in the model whenever their interaction effects are screened (see Appendix A.3 for the R code).
Table 4 shows the estimates of the screened effects A, B, D, E and A×B. It can easily be checked that the optimal equipment path is A1B3D3E2, with predicted response 71.47, and that the second best equipment path is A1B3D2E2, with predicted response 69.99. The equipment paths sorted in decreasing order of the predicted response are given in Table 5. This information can be used in deciding which equipment at the next stage should be assigned to the materials.
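The ordering in Table 5 can be reproduced from the estimates in Table 4. A Python sketch (the paper's code is in R; the function name is illustrative):

```python
from itertools import product

# Estimates from Table 4.
mu_hat = 64.9238
a_hat = {1: -0.9811, 2: 0.9811}
b_hat = {1: -1.9886, 2: -0.0264, 3: 2.0150}
d_hat = {1: -1.4597, 2: -0.0119, 3: 1.4716}
e_hat = {1: -1.0831, 2: 1.0831}
ab_hat = {(1, 1): -3.0545, (1, 2): 0.0980, (1, 3): 2.9565,
          (2, 1): 3.0545, (2, 2): -0.0980, (2, 3): -2.9565}

def predict(i, j, k, l):
    """Predicted response of equipment path Ai Bj Dk El."""
    return mu_hat + a_hat[i] + b_hat[j] + ab_hat[(i, j)] + d_hat[k] + e_hat[l]

paths = sorted(product([1, 2], [1, 2, 3], [1, 2, 3], [1, 2]),
               key=lambda p: predict(*p), reverse=True)
print(paths[0], round(predict(*paths[0]), 2))  # (1, 3, 3, 2) 71.47
print(paths[1], round(predict(*paths[1]), 2))  # (1, 3, 2, 2) 69.99
```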
From Table 5, it is seen that the optimal equipment path is A1B3D3E2. If equipment D3 is occupied at stage 4, then the next optimal path in Table 5 is A1B3D2E2. This simulation process is repeated 10,000 times. The case with error variance $1/n_{ijkl}$ is called the baseline case. Two other sizes of the error variance are also considered: $0.25/n_{ijkl}$ for the precise error and $9/n_{ijkl}$ for the noisy error.
In assessing the performance of the proposed method, let PT denote the proportion of detecting all true effects, A, B, D, E and A×B, and let PO denote the proportion of detecting the optimal path A1B3D3E2. The simulation results for the three cases are summarized in Table 6(a).
PT for the baseline case is 0.9980. Even though all the true effects are screened, the proportions of detecting one and two false effects in addition to the true effects are 0.4124 and 0.5012, respectively. A half-normal probability plot for the former case is given in Fig. 5. The main effect F is the smallest significant effect detected by the LGB method, and it might not be screened by eyeballing a half-normal probability plot, so there is a good chance of further reducing the proportion of detecting one false effect. Also, PO for the baseline case is 0.9980. The proportion of the designed optimal path being exactly the true optimal path A1B3D3E2 is 0.0708, and the proportion that the designed optimal path identifies two indifferent stages in addition to the true path A1B3D3E2 is 0.5106. Also, PO for the precise error case is 0.9996, but the proportion of identifying
Table 5
The equipment paths in decreasing order of the predicted response.
A B D E ŷ
1 3 3 2 71.47
1 3 2 2 69.99
2 1 3 2 69.53
1 3 3 1 69.30
1 3 1 2 68.54
2 2 3 2 68.34
2 1 2 2 68.04
1 3 2 1 67.82
2 3 3 2 67.52
2 1 3 1 67.36
2 2 2 2 66.85
2 1 1 2 66.59
1 2 3 2 66.57
1 3 1 1 66.37
2 2 3 1 66.17
2 3 2 2 66.03
2 1 2 1 65.88
2 2 1 2 65.40
2 3 3 1 65.35
1 2 2 2 65.09
2 2 2 1 64.69
2 3 1 2 64.59
2 1 1 1 64.43
1 2 3 1 64.40
2 3 2 1 63.87
1 2 1 2 63.64
2 2 1 1 63.24
1 2 2 1 62.92
2 3 1 1 62.42
1 2 1 1 61.47
1 1 3 2 61.45
1 1 2 2 59.97
1 1 3 1 59.29
1 1 1 2 58.52
1 1 2 1 57.80
1 1 1 1 56.36
Fig. 5. A half normal probability plot in the case of detecting one more false effect.
false effects is even higher; for example, the proportion that the designed optimal path identifies two indifferent stages in addition to the true path A1B3D3E2 is 0.7682. The efficiency of identifying the true effects and the optimal path gets a little worse
Table 6
Simulation results on the efficiency of the proposed strategy.
Precise error Baseline Noisy error
(a) Simulation results using all the historical data
Historical_72
PT 0.9996 0.9980 0.9220
No false 0.0092 0.0707 0.5428
One false effect 0.2203 0.4124 0.3130
Two false effects 0.7544 0.5012 0.0605
PO 0.9996 0.9980 0.9211
True optimal path 0.0092 0.0708 0.5443
One indifferent stage 0.2222 0.4166 0.3252
Two indifferent stages 0.7682 0.5106 0.0516
(b) Simulation results using only the historical data corresponding to the fractional factorial design
Historical_FFD (imputed)
PT 0.9884 0.9696 0.7027
No false 0.0094 0.0677 0.2966
One false effect 0.2056 0.3597 0.2863
Two false effects 0.7577 0.5270 0.1066
PO 0.9884 0.9696 0.6826
True optimal path 0.0095 0.0680 0.2886
One indifferent stage 0.2066 0.3616 0.2888
Two indifferent stages 0.7723 0.5400 0.1052
(c) Simulation results using the pilot experimental data
Random_36
PT 0.9631 0.9049 0.4564
No false 0.0174 0.1036 0.2966
One false effect 0.2638 0.4127 0.1376
Two false effects 0.6792 0.3865 0.0205
PO 0.9631 0.9048 0.4464
True optimal path 0.0174 0.1036 0.2919
One indifferent stage 0.2645 0.4134 0.1373
Two indifferent stages 0.6812 0.3878 0.0172
FFD_36
PT 1.0000 0.9990 0.7117
No false 0.1137 0.4637 0.6576
One false effect 0.4841 0.4275 0.0428
Two false effects 0.4004 0.1056 0.0100
PO 1.0000 0.9990 0.7068
True optimal path 0.1138 0.4639 0.6560
One indifferent stage 0.4850 0.4307 0.0492
Two indifferent stages 0.4012 0.1044 0.0016
in the noisy error case. PT and PO in the noisy error case are 0.9220 and 0.9211, respectively. Surprisingly, the proportion of the designed optimal path being the true optimal path is 0.5443, and the proportion that the designed optimal path identifies two indifferent stages in addition to the true path is 0.0516. Even though the efficiency of screening all true effects and identifying the true optimal path in the noisy error case is not as good as in the baseline and precise error cases, the efficiency in identifying few false effects is best in the noisy error case.
Traditionally, engineers have sorted the equipment paths in decreasing order of the mean responses, taking the equipment path with the maximum response as the candidate for the optimal equipment path. PO for this method in the precise error, baseline, and noisy error cases is 0.9142, 0.8929, and 0.5318, respectively, whereas PO for the proposed method in those three cases is 0.9996, 0.9980, and 0.9211 (Table 6(a)). Thus the proposed method does well at identifying the true optimal path, especially in the noisy error case.
5.2. Using only the historical data corresponding to the fractional factorial design
As discussed in Section 3, the product design between $2^{3-1}_{\mathrm{III}}$ with defining contrast ACE and $3^{3-1}_{\mathrm{III}}$ with defining contrast BDF$^2$ is generated. Only the historical data whose equipment paths correspond to the product design points in Table 2 are used; the rest are disregarded. Since one third of the $2^3 \times 3^3$ factorial design points are selected randomly to construct the historical data, there are about twenty-four missing observations in the product design. The approximate analysis is adopted: the missing observations are estimated by the predicted values of the quality characteristic y based on all the historical data, and those predicted values are used as imputed values. The same analysis as in Section 5.1 is carried out to find PO and PT in the precise error, baseline, and noisy error cases. The simulation results for the three cases are summarized in Table 6(b). PO in those three cases is 0.9884, 0.9696, and 0.6826, respectively. Thus the efficiency of screening all true effects and identifying the true optimal path with part of the historical data is not as good as with all the historical data. It is especially worse in the noisy error case, where it is important to have more data to predict the response well.
5.3. Using the pilot experimental data
Suppose it is possible to run pilot experiments. A set of data generated by the model in Eq. (3) at the product design points is called FFD_36, and a set generated at thirty-six randomly selected factorial design points is called Random_36. The simulation results in the precise error, baseline, and noisy error cases are summarized in Table 6(c). It is interesting to compare the efficiency of screening all true effects and identifying the true optimal path with FFD_36, Random_36, and Historical_72. Even though FFD_36 is half the size of Historical_72, PO with FFD_36 is a little better than PO with Historical_72 in the precise error and baseline cases. It is interesting to note that the proportion of the designed optimal path being the true optimal path with FFD_36 in the baseline case is 0.4639, whereas with Historical_72 it is only 0.0708. In the noisy error case, PO with FFD_36 is 0.7068 and PO with Historical_72 is 0.9211, which implies that taking more data is critical for higher efficiency. Note that PO with FFD_36 is much better than PO with Random_36 regardless of the size of the errors.
6. Conclusions and further studies
The allocation of equipment in a multi-stage process is discussed in this article. Usually there are multiple pieces of equipment in each stage of a multi-stage process to provide a smooth flow of materials through the process. When multiple pieces of equipment are available at a certain stage, a specific one must be chosen among them; when a specific piece of equipment is occupied, the next best available one must be chosen. Such preparations for the smooth flow of product material make the production process efficient and subsequently lower the production cost.
In the multi-stage process, it is assumed that the main effects and the two-way interaction effects of two adjacent stages are significant. The efficient allocation problem for the multi-stage process, given historical data, is solved by the general linear model approach, and the predicted responses are then ordered to choose the subsequently optimal equipment paths. The effectiveness of the proposed allocation strategy is evaluated in terms of the probabilities of detecting all true effects and of detecting the optimal equipment path for three cases of precision. It turns out that the noisy error case is less efficient than the others. When it is possible to run pilot experiments, the efficiency of the product design of orthogonal arrays, in particular of the $2^{k^*-f}_{\mathrm{III}}$ and $3^{k^*-f}_{\mathrm{III}}$ fractional factorial designs, is compared to that of a random selection of factorial design points; the former is shown to be more efficient than the latter in a case study.
The number of equipment available at each stage is constrained to at most three to keep the analysis simple in this article. Designs for stages with more than three pieces of equipment can be developed by steps similar to those proposed here. It is assumed by the dispatching rule that the next available equipment is searched immediately when the equipment designated by the selected equipment path is occupied, regardless of the expected waiting time for that specific equipment. It would be more appealing to process engineers if the equipment allocation were determined by a cost-based approach rather than the risk-based approach proposed in this article. Such considerations are left as further studies for the improvement of the equipment allocation strategy for multi-stage processes.
Acknowledgments
The authors would like to thank the associate editor and the referees for their valuable comments and suggestions.
Works of YongBin Lim and JongHee Chung were supported in part by Basic Science Research Program through the National
Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2014R1A1A2002032)
and BK21 Plus Project through the National Research Foundation of Korea (NRF) funded by the Ministry of Education
(22A20130011003), respectively. Changsoon Park’s work was supported by the Chung-Ang University research grant in
2014.
Appendix. R codes
A.1. R code for the generation of a product design given in Table 2
A.2. R code for drawing the half normal probability plot based on the LGB method
A.3. R code for respecting the hierarchy of effects and getting weighted LSE of effects
References
Lawson, J. (2012). daewr: Design and analysis of experiments with R. R package version 1.0-10.
Lawson, J., Grimshaw, S., & Burt, J. (1998). A quantitative method for identifying active contrasts in unreplicated factorial designs based on the half-normal
plot. Computational Statistics and Data Analysis, 26, 425–436.
Lenth, R. V. (1989). Quick and easy analysis of unreplicated factorials. Technometrics, 31, 469–473.
Loh, W. L. (1992). Identification of active contrasts in unreplicated factorial experiments. Computational Statistics and Data Analysis, 14, 135–148.
Monod, H., Bouvier, A., & Kobilinsky, A. (2013). A quick guide to PLANOR, an R package for the automatic generation of regular factorial designs. Technical report, MIA Unit, INRA, Jouy-en-Josas.