A3RF offers three assessment tiers, each designed for different organizational needs, timelines, and depth of analysis. All tiers use the same underlying agent architecture but differ in scope, evidence depth, and deliverable breadth.
The Rapid Assessment is designed for organizations that need a quick baseline understanding of their readiness posture. It is ideal for initial scoping, executive briefings, or competitive benchmarking.
Timeline: 3-5 business days from evidence submission to final deliverables.
Inputs: Automated questionnaire responses, up to 10 uploaded documents, optional 30-minute stakeholder interview.
Analysis: All six agents run but with reduced evidence depth. Agents focus on primary indicators rather than exhaustive analysis. Extended thinking is used only on the Meta-Agent synthesis step.
Deliverables: Executive Scorecard with composite AI Readiness Index, high-level pillar scores (1.0-5.0), top-5 critical gaps, and a one-page recommended action summary. No detailed pillar reports or transformation roadmap at this tier.
The Deep Dive assessment provides comprehensive analysis across all five pillars with full evidence correlation and detailed recommendations. This is the standard tier for organizations seriously planning transformation initiatives.
Timeline: 10-15 business days from evidence submission to final deliverables.
Inputs: Comprehensive questionnaire, up to 50 uploaded documents, API endpoint scanning (if authorized), 2-4 stakeholder interviews (60 minutes each), existing architecture documentation.
Analysis: Full pipeline execution with extended thinking enabled on all critical reasoning steps. Cross-signal exchange includes secondary and tertiary correlations. Confidence calibration uses industry benchmarking data.
Deliverables: Executive Scorecard, Detailed Pillar Reports (5 reports, 10-20 pages each), Gap Analysis Matrix with severity ratings, Transformation Roadmap with phased recommendations, preliminary Statement of Work for top-priority initiatives, interview summary analysis.
This is the most comprehensive assessment tier, designed for organizations ready to commit to a full transformation program. It includes all Deep Dive components plus ongoing monitoring, custom modeling, and implementation planning.
Timeline: 20-30 business days for initial assessment, plus 90-day monitoring period.
Inputs: Everything in Tier 2, plus live system access for automated scanning, process mining data exports, full security audit reports, organizational chart and RACI matrices, budget and resource allocation data.
Analysis: Maximum depth across all agents. Multiple analysis passes with iterative refinement. Custom scoring models calibrated to the organization's industry vertical. Benchmark comparison against industry peers (anonymized). Extended thinking enabled on all agent steps.
Deliverables: Everything in Tier 2, plus Custom Maturity Model calibrated to industry vertical, Detailed Implementation Plan with resource requirements, Technology Stack Recommendations with vendor comparison, Change Management Playbook, 90-Day Progress Monitoring Dashboard with automated re-scoring, ROI Projection Model, Board-ready presentation deck.
Welcome to the A3RF Intelligence Platform. This section walks you through every step of your experience as a customer, from initial onboarding through to receiving and acting on your assessment deliverables.
When your organization initiates an engagement, you will receive an email invitation with a secure link to create your A3RF platform account. After setting your password and configuring multi-factor authentication (MFA), you will land on your engagement dashboard. Your consultant will have pre-configured the engagement with the appropriate assessment tier, deployment model, and pillar focus areas.
Your first task is to complete the intake questionnaire, which adapts its questions based on your selected assessment tier. The questionnaire covers organizational context, current technology landscape, strategic priorities, and known pain points. Expect to spend 30-60 minutes on this step for Tier 1, or 2-3 hours for Tier 2/3.
Navigate to the Evidence tab to upload supporting documents. The platform accepts PDF, DOCX, XLSX, PPTX, PNG, JPG, and CSV files. Each upload should be tagged with the relevant pillar(s) it relates to. The platform will auto-classify documents using its Document Intelligence capabilities, but manual tagging improves accuracy.
Common evidence types include: architecture diagrams, data flow diagrams, API documentation, security audit reports, compliance certifications, process documentation, data governance policies, and integration inventories. Upload as much relevant documentation as possible -- the AI agents produce higher-confidence assessments with more evidence.
Once the assessment completes, your primary view is the Executive Scorecard. The scorecard displays your composite AI Readiness Index (a weighted average across all five pillars) and individual pillar scores on a 1.0 to 5.0 scale. Each score is accompanied by a confidence percentage indicating how certain the analysis is, based on evidence quality and completeness.
Color coding follows a consistent scheme: red (1.0-1.9) indicates critical gaps requiring immediate attention, orange (2.0-2.9) indicates developing capabilities with significant room for improvement, yellow (3.0-3.4) indicates defined processes that need refinement, blue (3.5-4.4) indicates managed and measurable capabilities, and green (4.5-5.0) indicates optimized, industry-leading practices.
Each pillar score maps to a named maturity level: 1.0-1.9 is Initial (ad hoc, reactive), 2.0-2.9 is Developing (emerging practices, inconsistent), 3.0-3.4 is Defined (standardized, documented), 3.5-4.4 is Managed (measured, controlled), and 4.5-5.0 is Optimizing (continuous improvement, industry-leading). Sub-dimension scores roll up into pillar scores using weighted averages, where weights are determined by the relative importance of each dimension to organizational readiness.
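The score bands above translate directly into a lookup. The sketch below is illustrative only (the function name is not part of any platform API); note that the bands are not uniform in width, since the Defined/Managed boundary falls at 3.5 rather than 4.0.

```python
def maturity_band(score: float) -> tuple[str, str]:
    """Map a 1.0-5.0 score to its A3RF maturity level and scorecard color.

    Band boundaries follow the documented ranges; scores between the
    quoted one-decimal endpoints (e.g. 1.95) fall into the lower band.
    """
    if not 1.0 <= score <= 5.0:
        raise ValueError("scores are defined on the 1.0-5.0 scale")
    if score < 2.0:
        return ("Initial", "red")
    if score < 3.0:
        return ("Developing", "orange")
    if score < 3.5:
        return ("Defined", "yellow")
    if score < 4.5:
        return ("Managed", "blue")
    return ("Optimizing", "green")
```

For example, `maturity_band(3.5)` returns `("Managed", "blue")`, while `maturity_band(3.4)` still falls in the Defined/yellow band.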
The confidence percentage next to each score indicates how much evidence supports the assessment. A score of 3.5 with 92% confidence is highly reliable. A score of 3.5 with 60% confidence suggests the AI agents had limited evidence and the actual maturity level could be higher or lower.
For Tier 2 and Tier 3 assessments, the platform generates a draft Statement of Work (SOW) outlining recommended next steps, estimated effort, timeline, and investment. You can review the SOW in the Deliverables tab, leave comments on specific sections, request revisions from your consultant, and ultimately approve or request changes. Approval is tracked with a full audit trail.
All deliverables are available in the Deliverables tab. You can view them interactively in the platform or download them as PDF or DOCX files. The Executive Scorecard is also available as a single-page PDF suitable for board presentations. Deliverables remain accessible in the platform for 12 months after engagement completion, and you can request extensions through your consultant.
A3RF uses a continuous 1.0 to 5.0 maturity scale that provides granular differentiation between organizations at similar stages of readiness. Scores are computed at the sub-dimension level and aggregate upward through weighted averages to pillar scores and the composite AI Readiness Index.
Level 1 -- Initial (1.0-1.9): Processes are ad hoc and reactive. No formal practices exist for the assessed dimension. Work depends on individual heroics rather than repeatable processes. Organizations at this level have significant risk exposure and are unprepared for advanced workloads in this dimension.
Level 2 -- Developing (2.0-2.9): Emerging practices exist but are inconsistently applied. Some documentation may be present but is not comprehensive or up to date. The organization recognizes the need for improvement and may have pilot initiatives underway, but has not achieved standardization.
Level 3 -- Defined (3.0-3.4): Standardized processes are documented and followed across the organization. Roles and responsibilities are clearly assigned. However, measurement and continuous improvement mechanisms are not yet mature. This level represents the minimum threshold for reliable advanced workload support.
Level 4 -- Managed (3.5-4.4): Processes are not only defined but actively measured and managed using quantitative metrics. The organization uses data to identify and address process deviations. Automation supports key processes. This level indicates strong readiness for implementation with manageable risk.
Level 5 -- Optimizing (4.5-5.0): The organization demonstrates continuous improvement driven by quantitative process feedback. Innovation is systematic. Best practices are regularly reviewed and updated. The organization is a peer benchmark in this dimension. This level represents industry-leading capability.
Raw scores from agents are adjusted by their confidence percentages before aggregation. The formula for a confidence-weighted pillar score is: Pillar Score = Sum(dimension_score * dimension_weight * confidence) / Sum(dimension_weight * confidence). This means low-confidence scores have less influence on the aggregate, preventing poorly-evidenced assessments from skewing overall results.
When confidence is below 50% for any dimension, the platform flags it as 'Insufficient Evidence' and excludes it from aggregation entirely, rather than allowing a low-quality score to distort the pillar assessment. The flagged dimension appears in the Gap Analysis Matrix as a recommended area for additional evidence gathering.
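Combining the confidence-weighted formula with the insufficient-evidence rule, the aggregation can be sketched as follows. This is a minimal illustration under stated assumptions (tuple-based inputs, confidence expressed as a fraction in [0, 1]), not the platform's internal implementation.

```python
def pillar_score(dimensions):
    """Confidence-weighted pillar score.

    `dimensions` is a list of (score, weight, confidence) tuples.
    Dimensions below 50% confidence are flagged as 'Insufficient
    Evidence' and excluded from aggregation entirely.

    Returns (score, flagged), where `flagged` lists the excluded tuples.
    """
    included = [(s, w, c) for s, w, c in dimensions if c >= 0.5]
    flagged = [(s, w, c) for s, w, c in dimensions if c < 0.5]
    if not included:
        raise ValueError("no dimension has sufficient evidence to score")
    numerator = sum(s * w * c for s, w, c in included)
    denominator = sum(w * c for _, w, c in included)
    return numerator / denominator, flagged
```

With dimensions `[(4.0, 0.5, 0.9), (3.0, 0.5, 0.9), (2.0, 0.3, 0.4)]`, the third dimension is excluded at 40% confidence and the pillar score works out to 3.5; a low-confidence low score cannot drag the pillar down.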
Consultant score overrides modify the final score for a specific sub-dimension. The override replaces the agent score in the aggregation calculation but the original agent score is preserved in metadata for audit purposes. Overrides require a written justification that becomes part of the assessment record.
The Meta-Agent's executive summary will note any overrides, including the magnitude of change and the consultant's justification. This ensures full transparency for the customer. Override history is tracked per engagement and per consultant, providing quality assurance data for practice leadership.
The A3RF platform supports keyboard shortcuts for power users to navigate quickly and efficiently. These shortcuts are available on all pages unless an input field is focused.
Cmd+K (or Ctrl+K on Windows/Linux) -- Open the Command Palette. The command palette provides quick access to navigation, actions, and search across the entire platform. Start typing to filter available commands.
Shift+/ (the ? key) -- Toggle the Platform Guide. Opens or closes this guide panel from any page.
Shift+T -- Toggle Theme. Switches between dark mode and light mode.
Escape -- Close Overlays. Closes the currently open overlay, modal, or panel. Works for the guide panel, command palette, and any modal dialogs.
Cmd+Enter -- Submit/Confirm. Submits the current form or confirms the current dialog. Useful for saving questionnaire responses or confirming assessment runs.
Cmd+/ -- Toggle Sidebar. Expands or collapses the main navigation sidebar for more screen space.