In technical risk analysis, Detection in Failure Modes and Effects Analysis (FMEA) is the likelihood that current detection controls will catch a failure cause or failure mode before it reaches the customer.
Where Severity measures impact/consequences and Occurrence measures frequency, Detection answers the question:
“If the failure happens, how likely are our current controls to detect/catch it in time?”
In the AIAG-VDA FMEA methodology, Detection plays an important role in the Risk Analysis step. Together with Severity and Occurrence, it is used to rank risks via the Action Priority (AP).
What is Detection in FMEA? #
According to the AIAG-VDA standard, the Detection rating is a number (1-10) that expresses how effective the existing controls are at identifying a potential failure cause or failure mode.
- A low Detection rating (1-3) means the control system is highly effective and will almost certainly identify the issue before it escapes.
- A high Detection rating (8-10) means the controls are unlikely to detect the failure, so it may only be discovered after it reaches the customer.
Guiding Principle for Detection Rating:
- DFMEA (Design FMEA): Detection focuses on design verification and validation methods, which include simulations, assembly tests, physical prototype/part testing, etc.
- PFMEA (Process FMEA): Detection focuses on process controls like inspection, error-proofing, process validation tests, in-process testing, and end-of-line checks.
Detection Rating Table (DFMEA vs PFMEA) #
The AIAG-VDA manual provides Detection Tables separately for DFMEA and PFMEA. Below is a simplified representation.
DFMEA – Design FMEA Detection Rating Table (AIAG-VDA)
Rating | Ability to detect | Detection method | Opportunity to detect | Example (Automotive) |
---|---|---|---|---|
10 | Very low | No detection method; failure not detectable | Test method not defined | No design test exists for hidden cracks in ECU housing. |
9 | Very low | No specific test method to detect the failure mode or cause | Pass-Fail, Test-to-Failure, Degradation Testing | Random prototype teardown without a specific test for connector latch strength. |
8 | Low | New test method; not yet proven | – | Visual review of CAD model for thermal hotspots without simulation or validation testing. |
7 | Low | Proven test method for verification, but testing is timed after development; if the test fails, production may be delayed by re-design or re-tooling | Pass-Fail Testing | Lab endurance test for suspension arm, but limited to a few load cycles not correlated to road conditions. |
6 | Moderate | Proven test method, timed after development (as for rating 7) | Test-to-Failure | Bench vibration testing of ECU, but not all frequency ranges covered. |
5 | Moderate | Proven test method, timed after development (as for rating 7) | Degradation Testing | Thermal simulation validated by a prototype heat-soak test at room conditions only. |
4 | High | Proven test method for verification with sufficient test timing; if the test fails, there is enough time to re-design or re-tool | Pass-Fail Testing | Accelerated life test (ALT) for actuator fatigue with a statistically significant sample size. |
3 | High | Proven test method with sufficient test timing (as for rating 4) | Test-to-Failure | Vibration + thermal cycling test for connector with proven correlation to warranty data. |
2 | High | Proven test method with sufficient test timing (as for rating 4) | Degradation Testing | Crash simulation verified by full-vehicle crash testing confirming airbag deployment. |
1 | Very high | Proven detection controls that always detect the failure mode or cause; after test confirmation, the failure mode or cause cannot occur | – | Design incorporates poka-yoke geometry, validated by simulation and 100% prototype functional testing. |
PFMEA – Process FMEA Detection Rating Table (AIAG-VDA)
Rating | Ability to detect | Detection method | Opportunity to detect | Example (Automotive) |
---|---|---|---|---|
10 | Very low | No detection method; failure not detectable | Failure cannot be detected | Missing weld nugget on chassis bracket; no inspection or detection method in place. |
9 | Very low | Testing / inspection not likely to detect the failure mode | Not easily detected (e.g., by random audits) | Improper adhesive application in ECU potting; not included in the audit scope. |
8 | Low | Test / inspection method not proven or not effective (e.g., the plant has little or no experience with it) | Human inspection or use of manual gauging | Visual check of paint thickness by an operator with no defined spec or training. |
7 | Low | Test / inspection method not proven or not effective (as for rating 8) | Machine-based detection (automatic or semi-automatic) or use of inspection equipment (e.g., CMM) | First-time use of a vision system for orientation; no performance validation. |
6 | Moderate | Test / inspection method proven and effective (e.g., the plant has experience with the method) | Human inspection or use of manual gauging | Manual go/no-go gauge for gear shaft diameter; operator trained, process stable. |
5 | Moderate | Test / inspection method proven and effective (as for rating 6) | Machine-based detection (semi-automatic) or use of inspection equipment (e.g., CMM) | CMM measurement of machined engine block bore size. |
4 | High | System proven and effective (e.g., the plant has experience on similar processes) | Machine-based detection (automatic) detects the failure mode downstream | Automated end-of-line (EOL) electrical test for faulty seat heater wiring. |
3 | High | System proven and effective (as for rating 4) | Machine-based detection (automatic) detects the failure mode in-station | In-station pressure test for brake caliper leak immediately after assembly. |
2 | High | Detection method proven and effective (e.g., the plant has experience with the method, error-proofing verification, etc.) | Machine-based detection detects the cause and prevents the failure mode from occurring | Poka-yoke sensor blocks misaligned steering column bolt insertion. |
1 | Very high | Proven detection controls that always detect the failure mode or cause; the failure mode cannot occur as designed or processed | – | RFID tags and a shape-based fixture prevent wrong assembly. |
Notes:
- Downstream detection (Rating 4) means the failure is caught later, e.g., EOL.
- In-station detection (Rating 3) catches it at the operation where it occurs.
- Poka-yoke (Rating 2) actively prevents or stops the error.
- Rating 1 implies the failure cannot happen due to built-in design/process safeguards.
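As a quick sanity check during team reviews, the hierarchy in these notes can be sketched as a small lookup. This is an illustrative simplification, not the official AIAG-VDA rating table, and the category names below are invented for the sketch:

```python
# Simplified PFMEA Detection lookup based on where and how the control acts.
# Illustrative sketch only: the category names and this mapping are a
# simplification of the full AIAG-VDA PFMEA table, not a replacement for it.
PFMEA_DETECTION_RATING = {
    "no_detection": 10,         # no inspection or test method in place
    "random_audit": 9,          # failure not likely to be caught by sampling
    "downstream_automatic": 4,  # caught later, e.g. at end-of-line (EOL)
    "in_station_automatic": 3,  # caught at the operation where it occurs
    "poka_yoke": 2,             # error-proofing prevents or stops the error
    "cannot_occur": 1,          # built-in safeguards make failure impossible
}

def detection_rating(control_category: str) -> int:
    """Return the Detection rating for a (simplified) control category."""
    return PFMEA_DETECTION_RATING[control_category]

print(detection_rating("in_station_automatic"))  # 3
```

For instance, the in-station brake-caliper pressure test discussed later maps to `in_station_automatic`, giving a rating of 3.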
Examples of Detection (How to assign Detection ratings) #
DFMEA Detection rating example
Let’s take a DFMEA example for an automotive ECU connector design:
- Failure Mode: Connector does not lock properly.
- Cause: Weak latch design.
- Current Detection Control: Prototype testing with a limited number of samples; latch force test done only at room temperature.
- Evaluation: Since the test may not detect all weak-latch cases, a Detection rating of 6 (moderate) is assigned.
If a 100% endurance test or a simulation with field correlation is added later, the Detection rating could improve to 3.
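One common convention, assumed here for illustration (check your organization's FMEA handbook), is that the most effective current control, i.e. the lowest rating number, sets the overall Detection rating. The improvement described above can then be sketched as:

```python
def overall_detection(control_ratings: list[int]) -> int:
    """The most effective control (lowest rating number) sets Detection.
    This convention is assumed for illustration; confirm it against your
    organization's FMEA handbook before relying on it."""
    if not control_ratings:
        return 10  # no detection controls at all -> worst possible rating
    return min(control_ratings)

# Before: only the limited room-temperature latch-force test (rated 6).
print(overall_detection([6]))     # 6
# After adding a field-correlated endurance test / simulation (rated 3):
print(overall_detection([6, 3]))  # 3
```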
Example of Detection in PFMEA #
Let’s take a PFMEA example for the assembly process of an automotive brake caliper:
- Failure Mode: Piston not fully pressed (incomplete seating).
- Cause: Pressing force not sufficient or misalignment during pressing.
- Current Detection Control: In-station pressure test detects leaks immediately after pressing.
- Evaluation: The test is performed immediately after the operation (automatic, in-station, machine-based detection) and it detects the failure mode (leak due to poor press fit). A Detection rating of 3 (high ability to detect) is assigned.
How Detection Fits in the 7-Step FMEA Process #
Detection is assigned in Step 5 – Risk Analysis of the AIAG-VDA 7-Step approach.
First, we identify the current detection controls; then we assign the Detection rating based on how effective those controls are.
Detection is revisited in Step 6 – Optimization of the AIAG-VDA 7-Step approach.
If the existing detection is weak and the Action Priority is high, the aim is to reduce that risk. Risk can be reduced in several ways; from a detection point of view, adding stronger detection controls during optimization lowers the Detection rating, which may reduce the overall risk.
In brief:
- Severity → How serious is the effect?
- Occurrence → How often will it happen?
- Detection → How likely are we to catch it before customer impact?
So, the combination of these three leads to Action Priority (AP: High, Medium, Low), guiding you on where to apply optimization efforts.
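As a rough sketch of how the three ratings combine, the function below mimics the intent of the AP logic: high Severity combined with weak prevention or detection drives priority up. The thresholds are invented for illustration; the real AIAG-VDA AP table is an explicit lookup over all 1000 S/O/D combinations and is what should be used in practice:

```python
def action_priority(severity: int, occurrence: int, detection: int) -> str:
    """Illustrative Action Priority sketch. NOT the official AIAG-VDA AP
    table, which is an explicit lookup over every S/O/D combination; the
    threshold rules below are invented to show the general intent only."""
    for value in (severity, occurrence, detection):
        if not 1 <= value <= 10:
            raise ValueError("S, O and D must each be between 1 and 10")
    # Severe effects with any realistic chance of escape get top priority.
    if severity >= 9 and (occurrence >= 2 or detection >= 2):
        return "High"
    if severity >= 5 and occurrence >= 4 and detection >= 4:
        return "High"
    if severity >= 5 and (occurrence >= 4 or detection >= 5):
        return "Medium"
    return "Low"

print(action_priority(9, 3, 6))  # High
print(action_priority(6, 2, 3))  # Low
```

Note how lowering only the Detection rating (e.g., after adding an in-station test) can move a risk out of the High band even when Severity is unchanged.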
How to Improve Detection in FMEA #
To reduce the Detection rating, teams must strengthen the detection controls. Best practices include:
- Error-Proofing (Poka-Yoke): Designing processes or products so failures cannot escape undetected.
- Automation: Using sensors, vision systems, or end-of-line testers instead of manual checks.
- Simulation & Validation: Correlating virtual and physical tests with performance measures.
- 100% Inspection: Especially in safety-critical components.
- Statistical Sampling with Capability Studies: Ensure the detection process itself is reliable.
Common Mistakes to Avoid #
- Relying on operator visual inspection as the primary detection control.
- Confusing occurrence prevention controls with detection controls.
- Not updating detection ratings when new inspection or test methods are added.
Key Takeaways #
- Detection in FMEA = how easily a failure will be caught before it reaches the customer.
- Lower Detection rating = better controls.
- DFMEA and PFMEA use slightly different rating scales based on design vs process controls.
- Improving detection requires automation, error-proofing, and robust validation methods.
FAQs #
Is Detection independent of Occurrence?
Yes, Detection focuses only on the ability of controls to identify failures, not on how often they occur.
Does a higher Detection rating mean better detection?
No, the scale ranges from 1 (best) to 10 (worst).
How can the Detection rating be improved?
Introduce poka-yoke, 100% automated checks, and end-of-line functional tests.
How does Detection affect Action Priority?
A higher Detection rating increases the AP level, meaning higher risk and higher priority for corrective actions.