
Industrial Statistics 2017 - Introduction to Industrial Statistics 2017

Date: 2017-04-13 - 2017-04-14

Deadline: 2017-04-12

Venue: Hilton Garden Inn Salt Lake City Airport, 4975 Wiley Post Way, Salt Lake City, Utah 84116, United States

Keywords: Introduction to Industrial Statistics; Clinical Trials Regulation; Upcoming HIPAA webinars

Website: https://www.globalcompliancepanel.com/co...

Topics/Call for Papers

Overview:
This introductory course in industrial statistics will equip attendees to understand basic descriptive statistics, such as measures of central tendency (the average) and of variation (the range and standard deviation), and to use graphical methods such as the box and whisker plot to visualize these statistics for data sets. The concepts of variation and accuracy, and their effects on outgoing quality, will be introduced at the beginning. The basic data visualization tools of the histogram and Pareto chart also will be presented.
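As a minimal illustration of these statistics (not part of the course materials), the following Python sketch computes the average, range, and sample standard deviation for a hypothetical data set and draws a box and whisker plot; the data values are assumptions for illustration only:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data set of ten measurements (illustrative values only)
data = np.array([10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.1, 10.2])

mean = data.mean()                      # central tendency: the average
data_range = data.max() - data.min()    # variation: the range
std_dev = data.std(ddof=1)              # variation: sample standard deviation

print(f"mean = {mean:.3f}, range = {data_range:.3f}, std dev = {std_dev:.3f}")

# Box and whisker plot to visualize the same statistics
plt.boxplot(data)
plt.title("Box and whisker plot of the sample")
plt.show()
```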
The next major subject will be statistical hypothesis testing, the foundation of almost everything we do in industrial statistics. The material is applicable not only to statistical process control and acceptance sampling (both of which will be discussed in this course) but also to design of experiments.
Attribute data (the kind that must be counted as integers, such as defects and nonconformances) will then be addressed through the hypergeometric, binomial, and Poisson distributions. The binomial distribution will be applied to the ANSI/ASQ Z1.4 plan for sampling by attributes to illustrate its application. Discovery sampling also will be addressed here, along with awareness of sequential sampling and its advantages.
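To show how the binomial distribution drives sampling by attributes, here is a hedged Python sketch that computes the probability of accepting a lot under a hypothetical single sampling plan; the plan parameters n and c are assumptions for illustration, not an actual ANSI/ASQ Z1.4 selection:

```python
from scipy.stats import binom

# Hypothetical single sampling plan (assumed values, not a Z1.4 plan)
n, c = 80, 2   # sample size and acceptance number

for p in (0.005, 0.01, 0.02, 0.05):      # candidate lot fractions nonconforming
    pa = binom.cdf(c, n, p)              # P(accept) = P(X <= c), X ~ Binomial(n, p)
    print(f"p = {p:.3f}: probability of acceptance = {pa:.3f}")
```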
Variables data are continuous scale data, the kind that are measured with real numbers. Models include the normal (bell curve) distribution and, for comparison of sample properties, the t and chi square distributions. Confidence intervals can be determined for the mean of a population on the basis of the sample average and its known or estimated standard deviation. The Central Limit Theorem, which says that the averages of large samples will behave like a normal distribution regardless of the normality of the underlying population, also will be addressed.
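A short Python sketch of a t-based confidence interval for the population mean; the sample values and the 95% level are illustrative assumptions:

```python
import numpy as np
from scipy import stats

# Illustrative sample (assumed values)
data = np.array([10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.1, 10.2])
n, xbar, s = len(data), data.mean(), data.std(ddof=1)

# Sigma is estimated from the sample, so the t distribution applies
t_crit = stats.t.ppf(0.975, df=n - 1)    # two-sided 95% critical value
half_width = t_crit * s / np.sqrt(n)
print(f"95% CI for the mean: {xbar - half_width:.3f} to {xbar + half_width:.3f}")
```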
The purpose of statistical process control (SPC) is to distinguish between random or common cause variation that is inherent in a process, and special or assignable cause variation that means there is a problem with the process. SPC begins with a discussion of the rational subgroup, or a sample that accounts for all the variation in a process. It is important to select it correctly if SPC is to work properly.
Attribute control charts include charts for the number nonconforming (np) and the number of defects (c). The np chart is based on the binomial distribution, and the c chart on the Poisson distribution. (The p chart for fraction nonconforming and u chart for defect density serve similar purposes.)
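The standard three-sigma control limits for these charts follow directly from the binomial and Poisson models. A minimal Python sketch, where the sample size and the counts are illustrative assumptions:

```python
import numpy as np

# np chart: counts of nonconforming units in samples of constant size n
n = 100                                                  # assumed sample size
np_counts = np.array([3, 5, 2, 4, 6, 3, 4, 5, 2, 4])     # illustrative data
np_bar = np_counts.mean()
p_bar = np_bar / n
sigma_np = np.sqrt(np_bar * (1 - p_bar))                 # binomial standard deviation
print(f"np chart: CL = {np_bar:.2f}, "
      f"LCL = {max(0.0, np_bar - 3 * sigma_np):.2f}, UCL = {np_bar + 3 * sigma_np:.2f}")

# c chart: defect counts per inspection unit, Poisson model
c_counts = np.array([7, 4, 6, 9, 5, 8, 6, 7, 5, 6])      # illustrative data
c_bar = c_counts.mean()
sigma_c = np.sqrt(c_bar)                                 # Poisson: variance equals mean
print(f"c chart:  CL = {c_bar:.2f}, "
      f"LCL = {max(0.0, c_bar - 3 * sigma_c):.2f}, UCL = {c_bar + 3 * sigma_c:.2f}")
```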
Charts for variables data are far more powerful than attribute charts, i.e., better able to detect process shifts. The X chart is for individual measurements, and the x-bar/R (sample average and range) and x-bar/s (sample average and sample standard deviation) charts are for samples. Variables data also make it possible to calculate process capability and process performance indices. If these indices are substantially different, the rational subgroup has not been selected properly.
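A hedged Python sketch of capability (Cpk, from within-subgroup variation via R-bar/d2) versus performance (Ppk, from overall variation); the subgroup data and specification limits are illustrative assumptions:

```python
import numpy as np

# Illustrative data: five subgroups of size 4 (assumed values)
subgroups = np.array([
    [10.1, 10.2,  9.9, 10.0],
    [10.0, 10.3, 10.1,  9.8],
    [ 9.9, 10.1, 10.2, 10.0],
    [10.2, 10.0,  9.8, 10.1],
    [10.1,  9.9, 10.0, 10.2],
])
LSL, USL = 9.4, 10.6        # hypothetical specification limits

d2 = 2.059                                  # bias-correction constant for n = 4
r_bar = np.ptp(subgroups, axis=1).mean()    # average subgroup range
sigma_within = r_bar / d2                   # short-term (within-subgroup) sigma
sigma_overall = subgroups.std(ddof=1)       # long-term (overall) sigma
mu = subgroups.mean()

cpk = min(USL - mu, mu - LSL) / (3 * sigma_within)   # capability index
ppk = min(USL - mu, mu - LSL) / (3 * sigma_overall)  # performance index
print(f"Cpk = {cpk:.2f}, Ppk = {ppk:.2f}")
# A large gap between Cpk and Ppk suggests a poorly chosen rational subgroup
```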
The Anderson-Darling test for goodness of fit, along with the normal probability plot, can detect the presence of a non-normal distribution for the process data. If the data are not normally distributed, the traditional textbook methods will not work properly, but alternative methods are fortunately available.
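A minimal sketch of the Anderson-Darling test using scipy.stats.anderson, applied here to deliberately non-normal (lognormal) data for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.0, sigma=0.5, size=100)   # deliberately non-normal

result = stats.anderson(data, dist='norm')            # test against the normal model
print(f"A-squared = {result.statistic:.3f}")
for crit, sig in zip(result.critical_values, result.significance_level):
    print(f"  reject normality at the {sig}% level if A-squared > {crit:.3f}")
```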
Measurement systems analysis (MSA), or gage reproducibility and repeatability, allows the scientific estimation of gage precision, or the ability of the gage to get the same measurement consistently from the same specimen. Precision is not the same thing as accuracy, which is ensured by calibration, although both are important. Accuracy means that, on average, the measurement will equal the true measurement of the specimen, while precision reflects the ability to get the same measurement (right or wrong) consistently.
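A simple illustration of the distinction, assuming one appraiser measures the same specimen repeatedly against a known reference value: the average error estimates bias (accuracy) and the spread of the repeated readings estimates repeatability (precision). All values are hypothetical:

```python
import numpy as np

true_value = 10.00                         # hypothetical reference value
repeats = np.array([10.04, 10.06, 10.03, 10.05, 10.07,
                    10.04, 10.06, 10.05, 10.04, 10.06])

bias = repeats.mean() - true_value         # accuracy: average error vs. reference
repeatability = repeats.std(ddof=1)        # precision: spread of repeated readings
print(f"bias = {bias:+.3f} (accuracy), repeatability sigma = {repeatability:.3f} (precision)")
```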
Why should you attend:
An understanding of descriptive statistics and hypothesis tests provides the foundation for everything we do with industrial statistics. This includes:
• Statistical process control (SPC), which is covered in this course
• Design of Experiments (DOE), a powerful approach for testing assumptions during root cause analysis, process improvement, and generation of mathematical models for physical and chemical processes
• Acceptance sampling procedures such as ANSI/ASQ Z1.4 (formerly MIL-STD 105) and ANSI/ASQ Z1.9 (formerly MIL-STD 414). This course will show how some of the basic material applies to ANSI/ASQ Z1.4.
• Measurement systems analysis (MSA), or gage reproducibility and repeatability. This also is covered in this course.
Most statistical courses rely on the assumption that process data follow the normal or bell curve distribution. The bell curve is far more common in textbooks than it is in real processes, and mistaken reliance on it can deliver very misleading and inaccurate results. As an example, the process performance indices (such as Ppk, which is often requested by customers) can be off by orders of magnitude in terms of the estimated nonconforming fraction of the work (defects per million opportunities). Another way to say this is that a purported Six Sigma process might not even meet the minimum requirements for a capable process. This course will address the issue of the non-normal distributions that are common in real world industrial processes along with techniques for detecting them.
Who will benefit:
• Manufacturing Engineers
• Managers and Technicians
Agenda:
Day 1 Schedule
Lecture 1:
• How variation and accuracy affect quality; your process as a musket or a rifle.
• Basic statistics: measures of central tendency and variation.
• Graphical tools: histogram, Pareto chart, box and whisker plot, and scatter diagram. The histogram, Pareto chart, and scatter diagram are three of the traditional seven quality tools (the others being the process flow diagram, control chart, cause and effect diagram, and check sheet or tally sheet).
---
Lecture 2:
• Compound events (and/or relationships) and their application to series and parallel reliability and rolled throughput yield (RTY); see the sketch after this list
• Statistical hypothesis testing; the null and alternate hypotheses, and the associated risks
• Discrete (attribute) data distributions: hypergeometric, binomial, and Poisson
• Discovery sampling
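A minimal sketch of the "and/or" logic behind series/parallel reliability and rolled throughput yield; the per-step yields and reliabilities are hypothetical:

```python
import math

# Rolled throughput yield: product of first-pass yields of steps in series
step_yields = [0.98, 0.95, 0.99, 0.97]           # hypothetical per-step yields
rty = math.prod(step_yields)
print(f"RTY = {rty:.3f}")

# Series reliability follows the same "and" logic; parallel uses "or"
r1, r2 = 0.90, 0.90
series = r1 * r2                                 # both components must work
parallel = 1 - (1 - r1) * (1 - r2)               # at least one must work
print(f"series = {series:.3f}, parallel = {parallel:.3f}")
```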
---
Lecture 3:
• The normal or bell curve distribution
o Cumulative normal distribution, and estimation of the nonconforming (out of specification) fraction of a process
o The Six Sigma process, and what it really means
o Introduction to process capability indices
o Properties of sample averages, and the Central Limit Theorem
• The t distribution
• The chi square distribution; how to test a hypothesis about the variation of a process, e.g., did a proposed improvement reduce the process variation? (A sketch follows this list.)
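The chi square variance test mentioned above, sketched in Python; the historical sigma and the sample values are illustrative assumptions:

```python
import numpy as np
from scipy import stats

sigma0 = 0.20            # historical process standard deviation (assumed)
# Sample from the "improved" process (illustrative values)
sample = np.array([10.02, 9.98, 10.01, 10.03, 9.99, 10.00, 10.02, 9.97,
                   10.01, 10.00, 9.99, 10.02, 10.01, 9.98, 10.00, 10.01])
n, s = len(sample), sample.std(ddof=1)

chi2_stat = (n - 1) * s**2 / sigma0**2           # test statistic
p_value = stats.chi2.cdf(chi2_stat, df=n - 1)    # lower tail: H_a is sigma < sigma0
print(f"chi-square = {chi2_stat:.2f}, p = {p_value:.4f}")
# A small p-value is evidence that the improvement reduced the variation
```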
Day 2 Schedule
Lecture 1:
• Control charts (SPC)
o Special or assignable cause variation ("something is wrong") versus random or common cause variation (inherent in the process)
o SPC and hypothesis testing
• The rational subgroup; define it properly to ensure that SPC works properly
• Control charts for attribute data (data that must be counted as integers)
o np chart for number nonconforming
o c chart for number of defects
• Control chart interpretation
• Western Electric zone tests
---
Lecture 2:
• Estimation of process parameters for real-number or continuous scale data
• Tests for normality (bell curve): the normal probability plot and the Anderson-Darling test
o If the process data are not normal, process performance indices cannot be calculated accurately with the textbook methods; alternative methods are, however, available.
• X chart for individual measurements
• x-bar/R and x-bar/s charts for sample statistics
• Effects of autocorrelation and between-batch variation
o How control charts indicate improper selection of the rational subgroup
---
Lecture 3:
• Measurement systems analysis (MSA), or gage reproducibility and repeatability
o Precision versus accuracy
o Repeatability and reproducibility
o How to perform an MSA
• Attribute gage performance metrics
Speaker:
William A. Levinson, P.E., is the principal of Levinson Productivity Systems, P.C. He is an ASQ Fellow, Certified Quality Engineer, Quality Auditor, Quality Manager, Reliability Engineer, and Six Sigma Black Belt. He is also the author of several books on quality, productivity, and management, of which the most recent is The Expanded and Annotated My Life and Work: Henry Ford's Universal Code for World-Class Success.
Location: Salt Lake City, UT
Date: April 13th & 14th, 2017
Time: 9:00 AM to 6:00 PM
Venue: Hilton Garden Inn Salt Lake City Airport, 4975 Wiley Post Way, Salt Lake City, UT 84116
Price:
Register now and save $200 (Early Bird).
Seminar fee for one delegate: $1,295.00
Early Bird price (until February 28): $1,295.00
Regular price (March 01 to April 11): $1,495.00
Register for 5 attendees: $3,885.00 (regular price $6,475.00); you save $2,590.00 (40%)*
Quick Contact:
NetZealous DBA as GlobalCompliancePanel
Phone: 1-800-447-9407
Fax: 302-288-6884
Email: support@globalcompliancepanel.com
Website: http://www.globalcompliancepanel.com
Registration Link - http://www.globalcompliancepanel.com/control/globa...
Follow us on LinkedIn: https://www.linkedin.com/company/globalcompliancep...?
Like our Facebook page: https://www.facebook.com/TrainingsAtGlobalComplian...
Follow us on Twitter: https://twitter.com/GCPanel
