Online Training 2017 - Introduction to Design of Experiments
Date: 2017-04-18
Deadline: 2017-04-18
Venue: New Hyde Park, United States
Keywords: Design of experiments; Design of experiments online; Methods in statistical experim
Website: https://bit.ly/2nka1ck
Topics/Call for Papers
Overview:
Design of Experiments is a vital tool for process improvement and root cause analysis. It allows experimenters to determine, beyond a quantifiable reasonable doubt, that an experimental treatment improves the process, or that there is a difference between treatments, i.e., choices among the factors of the cause-and-effect diagram. While comprehensive mastery of DOE requires full college-level courses, enough basics can be taught in an hour to enable the attendee to know the key considerations in the design of an experiment and the interpretation of its results. The material on hypothesis testing will also position the attendee to understand statistical process control (SPC) and acceptance sampling.
Why Should You Attend:
A comparison of experiments conducted by Frederick Winslow Taylor (prior to the development of industrial statistics) and by a 20th century pharmaceutical company underscores the value of DOE. Taylor wished to optimize the conditions for metal cutting, and he identified twelve factors (such as the depth and duration of the cut) that he believed to influence the work. He then added (Principles of Scientific Management, 1911), "It may seem preposterous to many people that it should have required a period of 26 years to investigate the effect of these twelve variables upon the cutting speed of metals."
In addition, "In studying these laws more than 800,000 pounds of steel and iron was cut up into chips with the experimental tools, and it is estimated that from $150,000 to $200,000 was spent in the investigation." This was a small fortune in the late 19th century and, to make matters even worse, Taylor probably did not come up with the optimum solution because he had to hold eleven variables constant while he experimented with the twelfth. One-variable-at-a-time experimentation cannot identify, much less quantify, interactions between factors, i.e., synergies or antagonisms that make the whole greater or less than the sum of its parts. Only later were the statistical sciences developed that would have allowed Taylor to create the desired shop floor algorithm to optimize the conditions for every metal cutting job.
In contrast, DuPont had to determine the effect of nineteen factors on the sensitivity of a test for the AIDS virus. With the aid of modern DOE, it took only four weeks to identify the effects of, and interactions among, the nineteen factors (Cusimano, Jim. "Understanding and Using Design of Experiments," Quality, April 1996). This underscores the decisive competitive (in terms of the time necessary to obtain information and make decisions) and economic (the cost of the experiment) advantages of DOE.
Areas Covered in this Webinar:
DOE offers enormous advantages in terms of delivery of results that are (1) timely, (2) informative, and (3) relatively cheap in terms of time and material expended.
The basics of DOE include an understanding of factors and levels. A factor is often a category straight from the cause-and-effect diagram: Material, Machine, Manpower, Method, Measurement, or Environment. A level is a choice within the factor, such as material from vendor A, B, or C. In measurement systems analysis (MSA), or gage reproducibility and repeatability, an experiment determines whether the choice of inspector affects the measurement; this is the reproducibility component of R&R.
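To make the reproducibility idea concrete, here is a minimal sketch in Python using hypothetical measurement data (the values and inspector groupings are illustrative, not taken from the webinar material). A one-way analysis of variance checks whether the inspectors measure alike:

```python
# One-way ANOVA: does the choice of inspector affect the measurement?
# (This is the reproducibility component of gage R&R.)
# All data below are hypothetical, for illustration only.
from scipy import stats

# Repeated measurements of the same parts, one list per inspector.
inspector_a = [10.02, 10.01, 9.99, 10.03, 10.00]
inspector_b = [10.05, 10.04, 10.06, 10.03, 10.05]
inspector_c = [10.01, 10.00, 10.02, 9.98, 10.01]

f_stat, p_value = stats.f_oneway(inspector_a, inspector_b, inspector_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g., below 0.05) suggests the inspectors do not
# measure alike, i.e., a reproducibility problem.
```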
It is common in high school science classes to hold all factors constant but one, and one-variable-at-a-time experimentation works if only one variable is involved. When multiple factors are involved, however (as was the case in the Taylor and DuPont experiments), interactions may come into play. An interaction means the whole is greater or less than the sum of its parts, i.e., the effects of the individual factors. Awareness of interactions is vital to effective DOE.
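As a worked illustration (with made-up response values, not data from the Taylor or DuPont studies), the sketch below computes the main effects and the interaction for a two-level, two-factor experiment:

```python
# Estimating main effects and the interaction in a 2x2 experiment.
# Hypothetical cell means:
#            B low   B high
# A low       10       12
# A high      14       22
y_ll, y_lh, y_hl, y_hh = 10.0, 12.0, 14.0, 22.0

effect_a = ((y_hl + y_hh) - (y_ll + y_lh)) / 2        # average effect of A
effect_b = ((y_lh + y_hh) - (y_ll + y_hl)) / 2        # average effect of B
interaction_ab = ((y_hh - y_hl) - (y_lh - y_ll)) / 2  # how A's effect changes with B

print(f"A: {effect_a}, B: {effect_b}, AB interaction: {interaction_ab}")
# A nonzero AB term means the whole differs from the sum of the parts:
# here, raising both factors together adds more than the main effects predict.
```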
Designed experiments also use the key concept of hypothesis testing, which is central to almost everything we do with statistics. Hypothesis tests begin with a null hypothesis (null means "nothing"), which is similar to the presumption of innocence in a criminal trial. In DOE, we begin by assuming that there is no difference between the control and the experimental treatment. The null hypothesis in statistical process control (SPC) is that the process is in control and does not require adjustment. The starting assumption in acceptance sampling is that the production lot is acceptable. We must then prove the alternate hypothesis beyond a quantifiable reasonable doubt, which is known as the significance level, Type I risk, or producer's risk (the risk of rejecting an acceptable lot in acceptance sampling).
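A minimal sketch of such a test, assuming hypothetical control and treatment data and the conventional 0.05 significance level (neither is specified in the webinar description):

```python
# Two-sample t-test of the null hypothesis that the experimental
# treatment does not differ from the control. Data are hypothetical.
from scipy import stats

control   = [98.2, 99.1, 97.8, 98.5, 98.9, 99.0]
treatment = [99.4, 100.1, 99.8, 100.3, 99.6, 100.0]

t_stat, p_value = stats.ttest_ind(treatment, control)
alpha = 0.05  # significance level: the quantifiable "reasonable doubt" (Type I risk)
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} >= {alpha}: do not reject the null hypothesis")
```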
There is also a Type II risk of not rejecting the null hypothesis when it should be rejected (in acceptance sampling, the consumer's risk of accepting a lot that should be rejected). If the Type I risk is that of the boy who cried wolf, the Type II risk is that of the boy who does not see the wolf.
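The Type II risk can be estimated by simulation. The sketch below assumes an illustrative scenario (a true shift of half a standard deviation, ten observations per group, alpha of 0.05, none of which come from the webinar material) and counts how often the test fails to see the wolf:

```python
# Monte Carlo estimate of the Type II risk (beta) for a t-test.
# The shift, sample size, and alpha are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, shift, trials = 0.05, 10, 0.5, 10_000
misses = 0
for _ in range(trials):
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(shift, 1.0, n)   # the "wolf" is really there
    _, p = stats.ttest_ind(treated, control)
    if p >= alpha:                        # the test failed to see it
        misses += 1
beta = misses / trials
print(f"Estimated Type II risk: {beta:.2f} (power = {1 - beta:.2f})")
```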
A good experiment will exclude extraneous variation through randomization (such as processing specimens in random order to avoid effects that might come from the order in which they are processed) and blocking, or the assignment of all the treatments within a homogeneous experimental unit (such as a homogeneous plot of land in agricultural experiments). Replication means repeating treatment combinations to get more data.
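A minimal sketch of these ideas (the treatment and block names are hypothetical): every treatment appears once in each homogeneous block, and the run order within each block is randomized independently.

```python
# Randomized complete block design: blocking plus randomization.
import random

random.seed(42)  # fixed seed so the layout is reproducible
treatments = ["A", "B", "C", "D"]
blocks = ["plot 1", "plot 2", "plot 3"]  # homogeneous units, e.g., plots of land

for block in blocks:
    order = treatments[:]
    random.shuffle(order)  # randomize run order within the block
    print(block, order)

# Replication: each treatment is repeated once per block, giving
# three observations per treatment across the three blocks.
```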
The presentation will also introduce the factorial design, which allows the screening of multiple factors to determine which are and are not important.
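As a taste of what such a screening layout looks like, here is a minimal sketch that enumerates a two-level full factorial design for three hypothetical factors:

```python
# A 2^3 full factorial design: every combination of levels, 8 runs.
# Factor names and levels are hypothetical.
from itertools import product

factors = {"temperature": ("low", "high"),
           "pressure":    ("low", "high"),
           "catalyst":    ("A", "B")}

for run, combo in enumerate(product(*factors.values()), start=1):
    settings = dict(zip(factors.keys(), combo))
    print(f"run {run}: {settings}")
```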
Learning Objectives:
Know the competitive and economic implications of effective DOE.
Understand the basics of hypothesis testing. These fundamentals carry over into SPC and acceptance sampling as well as DOE.
Know how to interpret the results of an experiment in terms of its significance level, the quantifiable "reasonable doubt" of concluding wrongly that the experimental treatment differs from the control.
Know the meaning of factors, levels, and interactions.
Know the basics of how to exclude extraneous variation sources from an experiment to ensure that observed differences come from the factors under study.
Who Will Benefit:
Manufacturing and quality managers and technicians, as well as decision makers who need to work effectively with subject matter experts.
Speaker Profile:
William A. Levinson, P.E., is the principal of Levinson Productivity Systems, P.C. He is an ASQ Fellow, Certified Quality Engineer, Quality Auditor, Quality Manager, Reliability Engineer, and Six Sigma Black Belt.
He is also the author of several books on quality, productivity, and management, of which the most recent is The Expanded and Annotated My Life and Work: Henry Ford's Universal Code for World-Class Success.
For more details, please click on the link below:
http://bit.ly/2nka1ck
Email: referrals-AT-complianceglobal.us
Toll Free: +1-844-746-4244
Tel: +1-516-900-5515
Fax: +1-516-900-5510