🧭 What you’re doing in Step 5 #
You already set Severity (S) in Step 4 from the end-user effect. Now you will:
- Rate Occurrence (O) — “How likely is the cause to happen given our prevention controls?”
- Rate Detection (D) — “How likely is it we’ll catch the failure before ship given our detection controls?”
- Use the AIAG-VDA Action Priority (AP) table to classify each row as High / Medium / Low priority for action.
Rule of thumb: Improve O with prevention, improve D with detection. Don’t mix them.
📊 Practical rating rubric (evidence-based) #
Occurrence (O) — rate the cause frequency (1 = remote, 10 = very high)
Use hard evidence wherever possible:
- Prevention design strength: poka-yoke, recipe lock, hard-stops, error-proof fixtures
- Capability: Cp/Cpk (or Pp/Ppk) vs. spec; stability (SPC)
- Historical data: FPY, defect ppm, audit findings, MTBF/PM compliance
| O range | Typical evidence & interpretation (examples) |
|---|---|
| 1–2 | Robust prevention + historical proof (Cpk ≥ 1.67, mistake-proof design, no escapes in 12 months) |
| 3–5 | Good controls but not bulletproof (Cpk ≈ 1.33, controlled via SPC/PM; rare issues) |
| 6–8 | Weak or manual prevention; recurring issues weekly; Cpk < 1.00 or unstable |
| 9–10 | New/untuned process, no prevention, frequent daily issues |
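The O rubric above can be sketched as a small helper. This is a teaching sketch only: the function name, parameters, and band cut-offs are illustrative and mirror the table, not an official AIAG-VDA mapping.

```python
def suggest_o_band(cpk, escapes_12mo, mistake_proofed, new_process=False):
    """Suggest an Occurrence (O) band from capability evidence.

    Teaching sketch only: thresholds mirror the rubric table above,
    not an official AIAG-VDA mapping.
    """
    if new_process or cpk is None:
        return (9, 10)   # new/untuned process, no capability evidence yet
    if mistake_proofed and cpk >= 1.67 and escapes_12mo == 0:
        return (1, 2)    # robust prevention + historical proof
    if cpk >= 1.33 and escapes_12mo <= 1:
        return (3, 5)    # good controls, rare issues
    return (6, 8)        # weak/manual prevention or unstable process
```

For example, `suggest_o_band(1.72, 0, True)` returns the 1–2 band, matching the first rubric row.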
Detection (D) — rate the chance of NOT detecting the failure (1 = almost certain to detect, 10 = almost impossible to detect)
Consider coverage, timing, automation, and MSA:
- Coverage: 100% vs. sample; inline vs. offline
- Timing: early station vs. final EoL vs. no test
- Automation: interlocks, curve/limits auto-check vs. manual visual
- MSA quality: GR&R, masters, calibration, false-accept rate
| D range | Typical evidence & interpretation (examples) |
|---|---|
| 1–2 | Automated 100% detection at/near source with reliable interlock & proven MSA; cannot pass if bad |
| 3–5 | Automated 100% or strong test but later in flow / relies on trend/curve checks; solid MSA |
| 6–8 | Sample checks, manual visual, late/indirect test; MSA marginal |
| 9–10 | No control, or the failure is latent/undetectable prior to ship |
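The D rubric can be sketched the same way. Again a teaching sketch: the attribute names and cut-offs are illustrative and mirror the table above.

```python
def suggest_d_band(has_control, coverage_100pct, automated, at_source, msa_proven):
    """Suggest a Detection (D) band from control attributes.

    Teaching sketch only: attribute names and cut-offs are illustrative,
    mirroring the rubric table above.
    """
    if not has_control:
        return (9, 10)   # no control, or failure is latent before ship
    if coverage_100pct and automated and at_source and msa_proven:
        return (1, 2)    # interlocked 100% detection at/near source
    if coverage_100pct and automated:
        return (3, 5)    # strong test, but later in flow
    return (6, 8)        # sampled / manual / late or indirect check
```

Note the ordering: the weakest evidence wins, so a 100% automated test with unproven MSA lands in 3–5, not 1–2.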
Prevention affects O (e.g., hard-stop, recipe lock). Detection affects D (e.g., 100% LVDT check, leak-bench results, MES gates). A preventing poka-yoke is not a detection control.
🔺 Action Priority (AP) — deciding what to fix first #
Use the AIAG-VDA AP table (S, O, D) to mark each row H / M / L:
- AP = High (H): Action is required (or you must strongly justify why not).
- AP = Medium (M): Consider action; justify if none taken.
- AP = Low (L): No action typically required.
Quick heuristics (teaching aid):
- If S ≥ 9, AP tends to be High unless both O and D are very low (≈1–2).
- If D ≥ 7 on any safety/regulatory row (S ≥ 8), AP likely High.
- Strong, early 100% detection (D ≈ 2–3) can move AP to Medium/Low when S < 9.
(Use your customer’s official AP table for final calls.)
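The three heuristics above can be captured as a rough classifier. This is a teaching aid only, not the official AP table; individual rows may land differently once you apply the customer's published table, which always wins.

```python
def quick_ap(s, o, d):
    """Rough H/M classifier from the three heuristics above.

    Teaching aid only: the official AIAG-VDA AP table (or the
    customer's version) is the authority for final calls.
    """
    if s >= 9 and not (o <= 2 and d <= 2):
        return "H"   # high severity, and O/D are not both very low
    if s >= 8 and d >= 7:
        return "H"   # safety/regulatory row with weak detection
    if s < 9 and d <= 3:
        return "M"   # strong early detection; may drop to L per the table
    return "M"
```

For example, `quick_ap(10, 4, 3)` returns `"H"` (matching row 1 of the worked example), while `quick_ap(9, 1, 2)` stays at `"M"` because both O and D are very low.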
🧪 Worked example — rating O/D and AP for key chains #
S values came from Lesson 5.4. Here we justify O and D from the current controls (no future actions yet).
| # | Station | Failure Mode (from 5.4) | S | O | D | AP | Why (O/D justification) |
|---|---|---|---|---|---|---|---|
| 1 | OP05 Press-fit | Depth out of spec / signature out | 10 | 4 | 3 | H | Hard-stop + recipe + LVDT 100% + SPC → O=4; inline 100% detection D=3. S=10 keeps AP High. |
| 2 | OP05 Press-fit | Spline/impeller crack (over-force) | 8 | 3 | 4 | M | Force window + signature catch most issues → O=3, D=4 (pattern-based). |
| 3 | OP06 Seal | Mis-orientation / lip damage | 8 | 3 | 2 | M/L | Guided fixture + 100% vision give strong detection (D=2). With O=3, AP is often Low/Medium. |
| 4 | OP06 Seal | Shaft roughness high (Ra) | 8 | 4 | 6 | H | Prevention via COA + periodic audit (O=4), but detection mostly late (OP10/12) (D=6). |
| 5 | OP07 ESD | ESD event uncontrolled | 9 | 5 | 8 | H | Manual checks & shift audits only; latent failure → weak detection (D=8). Occurrence not rare (O=5) without a continuous monitor. |
| 6 | OP07 Potting | Mass low / voids / under-cure | 7 | 4 | 7 | H | Recipe lock exists (O=4), but no guaranteed 100% in-station verification → late/latent D=7. |
| 7 | OP09 Torque | Under-torque | 9 | 3 | 3 | M | DC tool with 100% trace & socket ID (D=3) and good calibration (O=3). |
| 8 | OP09 Torque | Over-torque / cross-thread | 8 | 3 | 4 | M | Strategy + trace (D=4). Some risk from thread starts → O=3. |
| 9 | OP10 Pre-leak | Bench recipe mis-set / fixture leak | 8 | 4 | 4 | H/M | Recipe lock & daily master help, but fixture wear & set-up risk persist (O=4, D=4). Many customers still expect H closure here. |
| 10 | OP12 Final test | Test bypass / recipe wrong | 10 | 2 | 8 | H | Role/recipe control lowers O, but if bypass occurs there’s no back-stop → D=8. |
| 11 | OP12 Final flow | Meter mis-cal / clogged filter | 10 | 3 | 6 | H | Cal matrix exists (O=3). False pass risk until master/dual-check tightened → D=6. |
| 12 | OP01 Kitting | Wrong impeller variant | 10 | 3 | 3 | M | 100% scan & kit verify; still O=3 due to human/label risks; D=3 with MES gate. |
| 13 | Interface (MES) | “No scan → no progress” gate disabled | 10 | 2 | 9 | H | Rare override (O=2) but catastrophic if it happens (D=9). |
| 14 | OP08 Connector | Mis-seat / poor crimp | 7 | 4 | 5 | M | PM + vision/pull sample: O=4, D=5 (sampled). Strengthening either will move this row to Low. |
Use this table as a pattern: repeat it for your remaining rows. When in doubt, err on the conservative (higher) side for D if evidence is weak (e.g., no GR&R, no master-part routine, or sampled manual checks).
📌 Building defensible ratings — the “evidence ladder” #
When auditors/customers ask “Why O=3? Why D=4?”, point to:
For O (prevention strength)
- Fixture design reviews, poka-yoke photos, recipe lock screenshots
- SPC stability/Cpk reports, PM compliance logs, supplier COAs with incoming capability
- Pilot/Run@Rate defect Pareto (frequency)
For D (detection coverage & quality)
- 100% vs. sample rationale; station placement (early vs. final)
- MSA studies (GR&R %, ndc), master part schedules/results, calibration certificates
- Curve storage, automated limit checks, interlocks, MES gate logs
- False-accept/false-reject data (where available)
🧾 Risk Analysis worksheet (columns to keep) #
- Function → Failure Mode → Effects (line & end user) → S
- Cause → O (with evidence note)
- Current detection → D (with evidence note)
- AP (H/M/L) → Decision (Action? Yes/No + rationale)
(You’ll add owners/dates and re-ratings in Step 6.)
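One way to keep these columns together in a script or export is a simple record type. The field names below are illustrative, chosen to match the worksheet columns above; the sample row is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class RiskRow:
    """One Risk Analysis worksheet row (field names are illustrative)."""
    function: str
    failure_mode: str
    effects: str           # line & end-user effects
    s: int                 # Severity, set in Step 4
    cause: str
    o: int                 # Occurrence
    o_evidence: str        # e.g. "Cpk 1.45 report, SPC stable"
    current_detection: str
    d: int                 # Detection
    d_evidence: str        # e.g. "GR&R 8%, daily master part"
    ap: str                # "H" / "M" / "L"
    decision: str          # Action? yes/no + rationale

# Hypothetical example row
row = RiskRow("Press bearing", "Depth out of spec", "Pump seizure (end user)",
              10, "Recipe drift", 4, "SPC + hard-stop",
              "Inline 100% LVDT", 3, "GR&R 7%", "H",
              "Action: add curve plausibility check")
```

Owners, dates, and re-ratings (Step 6) can be added as further fields once actions exist.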
✅ Outputs of Step 5 #
- ✅ Each PFMEA row has defensible O & D ratings tied to real evidence.
- ✅ AP marked to drive prioritization (H first, then M).
- ✅ A short list of must-fix chains ready for Step 6 (Optimization).
⚠️ Common pitfalls (and quick fixes) #
| Pitfall | Fix |
|---|---|
| Using RPN instead of AP | AIAG-VDA uses AP. Keep RPN out of decisions. |
| Rating by opinion | Attach evidence (SPC, MSA, PM, masters, audits). |
| Giving D too much credit for prevention | Remember: prevention → O; only checks/tests → D. |
| Late detection accepted as “good” | Final EoL testing alone is weaker than in-station detection. |
| Not separating false-accept risk | Use masters, calibration, and curve plausibility checks. |
🔗 What’s next #
Proceed to Lesson 5.6 — Step 6: Optimization (Actions & Re-evaluation). We’ll convert AP=High/Medium rows into concrete actions, assign owners/dates, and re-rate O/D based on evidence after implementation.
🧠 Pro Tip #
If you can’t prove a control works (with data), rate it as if it doesn’t. Let actions in Step 6 earn you the lower O/D later.