AHP Online Free

Free multi-criteria decision tool: build your AHP hierarchy, compare pairs with the Saaty scale, validate consistency, and export the project.

Project

Define the context, goal, and description of the decision.

Persistence

Export the project, import a saved file, or open the guided flow to add alternatives, criteria, and judgments.

Draft saved automatically in this browser.

Configure alternatives, criteria, and judgments in the guided flow.

Decision dashboard

The system recalculates weights, consistency, and ranking in real time.

Visual hierarchy

Goal, criteria, subcriteria, and alternatives in the analysis.

Final result

Consolidated ranking, global weights, and alternative share chart.

AHP Online Free - a free multi-criteria decision tool

Making complex decisions requires structuring criteria, recognizing trade-offs, and reaching a result that can be audited. The Analytic Hierarchy Process (AHP) is the most widely used multi-criteria method in the world for this purpose, and this free tool brings the entire workflow to your screen with no spreadsheet and no installation.

What is the Analytic Hierarchy Process

AHP was developed by mathematician Thomas L. Saaty in the 1970s while he was working at the Wharton School of the University of Pennsylvania. It was formally published in 1977 in the article “A scaling method for priorities in hierarchical structures” and consolidated in the book The Analytic Hierarchy Process (1980). The method quickly became a global reference in multi-criteria decision making (MCDM).

The central idea is to decompose a complex problem into a hierarchy of smaller elements - goal, criteria, subcriteria, and alternatives - and use pairwise comparisons to quantify subjective preferences. The result is a mathematically grounded weighting system that reflects the decision maker’s judgment.

The hierarchical structure

Every AHP analysis is organized into three mandatory levels:

  1. Goal (top): what you want to decide - for example, “Select the best software supplier”.
  2. Criteria and subcriteria (middle): the factors that influence the decision - for example, cost, timeline, technical support, and security.
  3. Alternatives (base): the options competing for the top rank - for example, Supplier A, B, and C.

Subcriteria extend the hierarchy by creating thematic groups. A criterion such as “Cost”, for example, may contain subcriteria such as “Annual license”, “Implementation cost”, and “Support”. Only leaf criteria (the last level of the tree) receive direct evaluation of the alternatives.
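As a sketch, the goal / criteria / alternatives structure above can be represented as a nested mapping. The names and shape here are illustrative, not the tool's internal schema:

```python
# Illustrative AHP hierarchy: goal -> criteria (with optional subcriteria) -> alternatives.
# Only leaf criteria (those with no children) later receive direct
# evaluation of the alternatives.
hierarchy = {
    "goal": "Select the best software supplier",
    "criteria": {
        "Cost": {                      # parent criterion: compared only to its siblings
            "Annual license": {},      # leaf subcriterion
            "Implementation cost": {},
            "Support": {},
        },
        "Timeline": {},                # leaf criterion (no subcriteria)
        "Security": {},
    },
    "alternatives": ["Supplier A", "Supplier B", "Supplier C"],
}

def leaves(criteria, path=()):
    """Return the path of every leaf criterion in the tree."""
    out = []
    for name, children in criteria.items():
        if children:
            out += leaves(children, path + (name,))
        else:
            out.append(path + (name,))
    return out

leaf_paths = leaves(hierarchy["criteria"])
```

Walking the tree this way yields exactly the leaf criteria that will be paired with the alternatives.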

The Saaty fundamental scale

To compare two elements, Saaty proposed a 1 to 9 scale based on the psychophysics of human perception:

Degree      | Definition              | Explanation
1           | Equal importance        | Equal contribution to the objective
3           | Moderate importance     | One element is slightly favored over another
5           | Strong importance       | One element is strongly favored over another
7           | Very strong importance  | One element is very strongly favored over another
9           | Extreme importance      | Highest possible order of affirmation of one element over another
2, 4, 6, 8  | Intermediate values     | Used when interpolation between judgments is needed

Reciprocal values (1/3, 1/5, 1/9, and so on) express the opposite direction: if A is 5 times more important than B, then B is 1/5 as important as A. The judgment matrix is built automatically with that reciprocity property.
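The reciprocity rule can be sketched in a few lines. This is an illustrative helper, not the tool's code; `Fraction` is used so that 1/3 and 1/5 stay exact:

```python
from fractions import Fraction

def judgment_matrix(n, upper):
    """Build a full reciprocal matrix from upper-triangle judgments.

    `upper` maps a pair (i, j) with i < j to the Saaty-scale value a_ij.
    The diagonal is 1 and the lower triangle is filled with 1/a_ij.
    """
    A = [[Fraction(1)] * n for _ in range(n)]
    for (i, j), v in upper.items():
        A[i][j] = Fraction(v)
        A[j][i] = 1 / Fraction(v)
    return A

# 3 criteria -> n(n-1)/2 = 3 judgments fill the whole matrix.
A = judgment_matrix(3, {(0, 1): 3, (0, 2): 5, (1, 2): 2})
```

Only the three upper-triangle judgments were supplied; the other six entries follow from the diagonal and reciprocity.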

The 1 to 9 scale is not arbitrary. Saaty derived it from the Weber-Fechner law, which states that human perception of differences is proportional to the logarithm of stimulus intensity. The 1-9 range covers almost the entire span of human perceptual discrimination for two magnitudes compared without direct measurement context.

The pairwise comparison matrix

Given n elements (criteria or alternatives), the judgment matrix A is a square matrix of order n × n, where each element a_ij represents the strength of preference of element i over element j:

  • The main diagonal is always 1 (every element is equal to itself).
  • If a_ij = k, then a_ji = 1/k (reciprocity).
  • Only the upper triangle needs to be filled in - the lower triangle is inferred.

The total number of judgments required for n elements is n(n − 1)/2. For 3 criteria: 3 comparisons. For 5: 10. For 7: 21.

Deriving the priority vector

Once the matrix is filled, the priority vector can be obtained by two methods:

Exact method - eigenvectors

The priority vector w is the principal eigenvector of matrix A, associated with the largest eigenvalue λ_max. Mathematically:

A · w = λ_max · w

This is the most precise method and requires spectral decomposition of the matrix.

Approximate method - geometric mean method (GMM)

For practical use, the geometric mean of the rows provides an excellent approximation and is computationally simple.

First, each matrix cell is normalized by dividing the element by the column total - Equation 1 (normalization of the judgment matrices):

w_i^(A_j) = A_ij / Σ_{i=1}^{n} A_ij

Where: A_ij is the judgment for the pair (i, j) and the denominator is the total of column j (the column vector A_j).

One approximation takes the local weight of each element as the average of its row in the normalized matrix. The geometric mean method goes a step further: for each row, the nth root of the product of all its elements is calculated, and the resulting values are normalized by dividing each by their total sum. This tool uses the geometric mean method, which is widely recognized as equivalent to the eigenvector approach for matrices with low inconsistency.
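The geometric mean method fits in a few lines of pure Python. A minimal sketch, with an illustrative three-criteria matrix:

```python
import math

def gmm_weights(A):
    """Priority vector via the geometric mean of each row, normalized to sum 1."""
    n = len(A)
    gm = [math.prod(row) ** (1 / n) for row in A]   # nth root of the row product
    total = sum(gm)
    return [g / total for g in gm]

# Example reciprocal matrix for three criteria (judgments are illustrative).
A = [
    [1,   3,   5],
    [1/3, 1,   2],
    [1/5, 1/2, 1],
]
w = gmm_weights(A)
```

The resulting weights always sum to 1, and the ordering matches the intensity of the judgments (here the first criterion dominates).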

Hierarchical synthesis - local and global weights

Each criterion receives a local weight (its relative importance within its parent group). The global weight is the product of the local weights along the entire hierarchical path:

global_weight(leaf) = local_weight(parent criterion) × local_weight(leaf)

For multi-level subcriteria, all local weights along the path from the goal to the leaf are multiplied. The sum of all leaf global weights always results in 1 (or 100%).
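The path-product rule can be sketched as a small recursion over the criteria tree. The tree shape and names here are illustrative:

```python
def global_weights(tree, parent_weight=1.0):
    """Multiply local weights along each path; returns {leaf_path: global_weight}.

    `tree` maps a criterion name to (local_weight, children); a leaf has {} as children.
    """
    out = {}
    for name, (local, children) in tree.items():
        w = parent_weight * local
        if children:
            for path, gw in global_weights(children, w).items():
                out[(name,) + path] = gw
        else:
            out[(name,)] = w
    return out

# Illustrative local weights: "Cost" (0.6) splits into two subcriteria.
tree = {
    "Cost":     (0.6, {"License": (0.7, {}), "Support": (0.3, {})}),
    "Timeline": (0.4, {}),
}
gw = global_weights(tree)
```

Here "License" ends up with global weight 0.6 × 0.7 = 0.42, and the leaf global weights sum to 1, as the text states.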

The final score of each alternative (User Index - UI) is obtained by summing, over all criteria and subcriteria, the product of the criterion weight, the subcriterion weight, and the alternative's normalized subcriterion value, according to Equation 4:

UI = Σ_{j=1}^{m} Σ_{i=1}^{n} P_cj × P_sci × V_sci

Where:

Symbol     | Description
c          | criterion associated with the analyzed subcriterion
sc         | subcriterion
m          | number of criteria
n          | number of subcriteria
V_sc       | normalized subcriterion value (range 0 to 1)
P_c, P_sc  | weight of the criterion and of the subcriterion, respectively

The UI represents the proportion each alternative holds relative to the others from the evaluator’s perspective. The alternative with the highest UI is the recommended choice according to the defined criteria and judgments (Saaty, 1991).
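Once the leaf global weights are known, Equation 4 reduces to a weighted sum per alternative. A sketch with hypothetical leaf weights and normalized scores (the function name `user_index` is mine, not the tool's):

```python
def user_index(global_weights, local_scores):
    """Final score (UI) per alternative: sum over leaves of
    global leaf weight x the alternative's normalized value on that leaf."""
    alternatives = next(iter(local_scores.values())).keys()
    return {
        alt: sum(w * local_scores[leaf][alt] for leaf, w in global_weights.items())
        for alt in alternatives
    }

gw = {"License": 0.42, "Support": 0.18, "Timeline": 0.40}   # leaf global weights
scores = {                                                   # normalized per leaf (sum to 1)
    "License":  {"A": 0.5, "B": 0.3, "C": 0.2},
    "Support":  {"A": 0.2, "B": 0.5, "C": 0.3},
    "Timeline": {"A": 0.4, "B": 0.4, "C": 0.2},
}
ui = user_index(gw, scores)
```

Because both the leaf weights and the per-leaf scores sum to 1, the UI values sum to 1 as well, so each UI reads directly as the alternative's share.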

Consistency verification

One of Saaty’s most important contributions was the creation of a metric to measure the internal coherence of judgments.

Why inconsistency happens

In a perfectly rational comparison, if A is 3 times more important than B, and B is 2 times more important than C, then A should be 6 times more important than C. In practice, humans rarely maintain that kind of transitivity across longer comparison chains.

Consistency Index (CI)

The λ_max of a perfectly consistent matrix is equal to n. When inconsistency exists, λ_max > n. The CI measures that deviation - Equation 3:

CI = (λ_max − n) / (n − 1)

Where: λ_max is the principal (largest) eigenvalue of the judgment matrix and n is the order of the matrix (the number of compared criteria or alternatives).

λ_max can be calculated from the priority vector w:

λ_max = (1/n) Σ_{i=1}^{n} (A · w)_i / w_i
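Both formulas translate directly to code. A sketch, checked on a perfectly consistent matrix (where a_ik = a_ij × a_jk, so λ_max = n and CI = 0):

```python
def lambda_max(A, w):
    """Principal eigenvalue estimate: average of (A.w)_i / w_i over all rows."""
    n = len(A)
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    return sum(Aw[i] / w[i] for i in range(n)) / n

def consistency_index(A, w):
    """CI = (lambda_max - n) / (n - 1)."""
    n = len(A)
    return (lambda_max(A, w) - n) / (n - 1)

# A perfectly consistent matrix: 4 = 2 * 2 closes the transitivity chain.
A = [[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]]
w = [4/7, 2/7, 1/7]   # exact priority vector for this matrix
```

Any real set of judgments will instead yield λ_max slightly above n, and the CI captures exactly that excess.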

Random Index (RI)

To compare CI against a reference threshold, Saaty experimentally calculated the expected CI value for randomly filled matrices - the Random Index (RI). This value varies with matrix order:

n   | 1    | 2    | 3    | 4    | 5    | 6    | 7    | 8    | 9    | 10
RI  | 0.00 | 0.00 | 0.58 | 0.90 | 1.12 | 1.24 | 1.32 | 1.41 | 1.45 | 1.49

Source: Adapted from Saaty (1991).

Consistency Ratio (CR)

The Consistency Ratio is given by Equation 2:

CR = CI / RI

Where: CI is the Consistency Index, RI is the Random Index, and CR is the Consistency Ratio.

Accepted thresholds according to Saaty:

  • CR <= 0.10 - acceptable judgment, proceed.
  • 0.10 < CR <= 0.20 - reasonable consistency, review is recommended.
  • CR > 0.20 - strong inconsistency; review the judgments before trusting the result.

For 2x2 matrices, consistency is trivial and CR is not calculated.
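The RI table and the thresholds above can be sketched together. The wording of the verdicts is mine; the numbers are the ones listed in this section:

```python
# Saaty's Random Index by matrix order n (n = 1..10), from the table above.
RI = {1: 0.00, 2: 0.00, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(ci, n):
    """CR = CI / RI(n). A matrix of order <= 2 is trivially consistent,
    so CR is reported as 0 rather than computed."""
    if n <= 2:
        return 0.0
    return ci / RI[n]

def verdict(cr):
    """Saaty's acceptance thresholds for the Consistency Ratio."""
    if cr <= 0.10:
        return "acceptable"
    if cr <= 0.20:
        return "review recommended"
    return "strong inconsistency"
```

For example, a 3x3 matrix with CI = 0.058 gives CR = 0.058 / 0.58 = 0.10, right at the acceptance limit.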

How to reduce CR

  1. Identify the most discrepant judgment pair.
  2. Adjust the slider for that pair toward a value that reduces the contradiction.
  3. Do not change all judgments randomly - focus on the most problematic pair.
  4. Recalculate and verify whether CR improved.

Alternatives evaluated by direct values

Pure AHP requires subjective judgments for every criterion. But in many real-world problems, some criteria already have objective data available: price in dollars, time in days, a score on a scale, energy consumption in kWh. In those cases, the tool lets you enter the values directly and automatically converts them into local weights.

For criteria where higher is better (maximization):

local_weight(i) = value(i) / Σ values

For criteria where lower is better (minimization):

local_weight(i) = (1 / value(i)) / Σ (1 / values)

The result is equivalent in meaning to classical AHP: each alternative receives a local weight between 0 and 1 that reflects its relative performance for that criterion.
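The two normalizations can be sketched in one function. The values are illustrative:

```python
def direct_weights(values, maximize=True):
    """Convert objective values into local weights that sum to 1.

    maximize=True : higher is better, weight_i = v_i / sum(v)
    maximize=False: lower is better,  weight_i = (1/v_i) / sum(1/v)
    """
    if maximize:
        total = sum(values)
        return [v / total for v in values]
    inv = [1 / v for v in values]
    total = sum(inv)
    return [x / total for x in inv]

# Price in dollars (lower is better): the cheapest alternative gets the largest weight.
w = direct_weights([100, 200, 400], maximize=False)
```

With prices 100, 200, and 400, the cheapest option receives weight 4/7 and the most expensive 1/7, preserving the inverse proportions.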

Sensitivity analysis

Beyond the base ranking, the tool includes a practical sensitivity analysis layer so you can test the robustness of the decision. Instead of accepting the final result as fixed, you can simulate controlled changes and observe whether the leader remains stable or whether another alternative takes first place.

In practice, sensitivity analysis helps answer questions such as:

  • if a leaf criterion becomes more important, does the winner stay the same?
  • which pairwise judgment is most likely to reverse the ranking?
  • if a direct objective value increases or decreases, does the result change?
  • which criteria make the decision more robust, and which make it more fragile?

What this tool simulates

The sensitivity workflow covers the main leverage points of the model:

  • leaf-criterion weight variation: adjusts the relative participation of each terminal criterion and recalculates the ranking;
  • AHP judgment step simulation: tests how changing the intensity of a pairwise comparison affects the result;
  • direct-value simulation: changes objective values of the alternatives in quantitative criteria and synthesizes the local weights again;
  • combined criterion scenarios: applies simultaneous adjustments to more than one criterion to test cumulative effects;
  • combined alternative scenarios: evaluates how coordinated changes in alternative performance affect the leading position.

Winner robustness

One of the most useful outputs is the identification of the winner’s stable range. The tool estimates how far a criterion’s weight can go up or down before a leader switch occurs in the ranking.

This matters because two analyses may have the same first place but very different levels of robustness:

  • if small changes already switch the leader, the decision is sensitive and deserves review;
  • if the winner remains stable even under broad adjustments, the decision is more robust.
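A minimal sketch of the weight-variation idea, not the tool's algorithm: vary one criterion's weight, rescale the others proportionally so the weights still sum to 1, and sweep downward until the leader changes. The data is illustrative:

```python
def scores_with_weight(weights, values, idx, new_w):
    """Set criterion `idx` to weight `new_w`, rescale the other weights
    proportionally so they sum to 1 - new_w, and score each alternative."""
    rest = sum(w for i, w in enumerate(weights) if i != idx)
    adj = [new_w if i == idx else w * (1 - new_w) / rest
           for i, w in enumerate(weights)]
    return [sum(w * v for w, v in zip(adj, alt)) for alt in values]

def leader(scores):
    """Index of the highest-scoring alternative."""
    return max(range(len(scores)), key=scores.__getitem__)

# Illustrative data: 2 leaf criteria; each row is one alternative's local weights.
weights = [0.6, 0.4]
values = [[0.8, 0.2],   # alternative 0: strong on criterion 0
          [0.3, 0.7]]   # alternative 1: strong on criterion 1

base = leader(scores_with_weight(weights, values, 0, weights[0]))
# Sweep criterion 0's weight downward in 0.01 steps to find the switch point.
switch = next((w / 100 for w in range(60, -1, -1)
               if leader(scores_with_weight(weights, values, 0, w / 100)) != base),
              None)
```

In this example the leader switches once criterion 0's weight drops to about 0.5: a narrow stable range, so the decision would deserve review.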

Managerial interpretation of sensitivity

In practice, sensitivity analysis helps turn AHP into a more auditable decision tool because it shows not only who won, but why the option won and how stable that win is.

The main benefits are:

  • identifying the criteria that are truly decisive;
  • locating critical judgments that deserve review in meetings;
  • explaining to stakeholders where the result is solid and where it is fragile;
  • documenting alternative scenarios in the final report.

From an executive standpoint, this helps distinguish a ranking that is merely plausible from one that is genuinely reliable.

Comparison with other MCDM methods

AHP is not the only multi-criteria decision-making method. Each method has distinct foundations, strengths, and limitations:

Method    | Mathematical basis                       | Handles inconsistency | Ease of use
AHP       | Eigenvectors / geometric mean            | Yes (CR)              | High
TOPSIS    | Euclidean distance to the ideal solution | Not explicitly        | Medium
ELECTRE   | Outranking relations                     | No                    | Low
VIKOR     | Compromise between majority and minority | No                    | Medium
PROMETHEE | Generalized preference functions         | No                    | Medium
ANP       | Supermatrix with dependencies            | Yes                   | Low

AHP stands out for its mathematical transparency, its ease of explanation to stakeholders, and its ability to combine qualitative and quantitative criteria in the same analysis.

Extensions of classical AHP

Fuzzy AHP (FAHP)

It replaces exact Saaty scale values with triangular or trapezoidal fuzzy numbers to capture the uncertainty and vagueness inherent in human judgments. It is widely used in contexts where the decision maker is not sure of the exact value but can say something like “between 3 and 5”.

Group AHP

When there are multiple decision makers, individual judgments need to be aggregated. The two most common methods are:

  • AIJ (Aggregation of Individual Judgments): aggregates the judgments before calculating weights, using the geometric mean.
  • AIP (Aggregation of Individual Priorities): each decision maker calculates weights separately, then they are aggregated with a weighted geometric mean.
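The AIJ variant is a one-liner per matrix cell: the element-wise geometric mean of all decision makers' judgments, which keeps the aggregated matrix reciprocal. A sketch with two hypothetical decision makers:

```python
import math

def aggregate_judgments(matrices):
    """AIJ: element-wise geometric mean of the decision makers' judgment
    matrices. The result is still a reciprocal matrix."""
    n = len(matrices[0])
    k = len(matrices)
    return [[math.prod(m[i][j] for m in matrices) ** (1 / k)
             for j in range(n)]
            for i in range(n)]

# Two decision makers who judge the same pair at 3 and 1/3 cancel out to 1.
m1 = [[1, 3], [1/3, 1]]
m2 = [[1, 1/3], [3, 1]]
agg = aggregate_judgments([m1, m2])
```

Note how opposite judgments of equal intensity average out to indifference (1), one reason the geometric mean is preferred over the arithmetic mean for this aggregation.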

ANP - Analytic Network Process

ANP is an extension of AHP created by Saaty himself to model dependencies and feedback between criteria. It uses a supermatrix to capture interdependencies among elements at the same level or across different levels. It is highly expressive, but significantly more complex to model and interpret.

When to use AHP

AHP is recommended when:

  • there are multiple conflicting criteria that need to be weighted;
  • part of the criteria is qualitative or subjective;
  • the decision process needs to be documented and audited;
  • different decision makers need to reach a structured consensus;
  • the decision has significant consequences, such as investments, partner selection, or strategic definition.

Typical applications include:

  • supplier or contractor selection;
  • evaluation of proposals in procurement processes;
  • project prioritization in a portfolio;
  • industrial site selection;
  • IT strategy definition;
  • candidate evaluation in hiring processes;
  • civil, environmental, and logistics engineering decisions;
  • academic research in management, health, and social sciences.

Persistence and traceability

The draft is automatically saved in the browser’s localStorage. The exported JSON file contains the complete state: hierarchy, judgments, weights, and metadata. This makes it possible to:

  • resume exactly where you left off;
  • review judgments across separate meetings;
  • perform internal auditing of the decision process;
  • document the analysis in reports, dissertations, and presentations.

Limitations of this version

This tool covers classical AHP with support for direct values. The following features are not included in this version:

  • Fuzzy AHP with triangular numbers;
  • ANP with a supermatrix;
  • automatic aggregation of multiple decision makers;
  • integration with an external database;
  • real-time multi-user collaboration;
  • probabilistic sensitivity analysis with Monte Carlo simulation.

Frequently asked questions

What happens if I have only one criterion?

With only one leaf criterion, the ranking is determined exclusively by the judgments or direct values of that criterion. Consistency verification does not apply in contexts with 1 or 2 elements.

Can I mix AHP and direct values in the same analysis?

Yes. Each leaf criterion can use the mode that makes the most sense. A qualitative criterion can use AHP pairwise comparisons, while a quantitative criterion can use direct values. The system synthesizes both automatically in the same hierarchy.

How many alternatives and criteria can I use?

There is no hard technical limit imposed by the tool. In practice, matrices above 9x9 make the judgment process much more laborious and the interpretation of weights less intuitive. Saaty recommended not exceeding 9 elements per comparison group.

What is a leaf criterion?

It is the criterion at the last level of the hierarchy - the one that has no child subcriteria. Only leaf criteria receive direct evaluation of the alternatives. A criterion with children is evaluated only relative to its siblings and does not receive direct alternative evaluation.

Why is CR greater than zero even with few criteria?

With 2 elements, CR is always zero (a 2x2 matrix is trivially consistent). With 3 or more, any judgment cycle already produces some CR > 0. The goal is to keep it below 0.10, not to force it to zero.

What is the bibliographic reference for the method?

The main reference is: Saaty, T. L. (1980). The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation. McGraw-Hill. The seminal article is: Saaty, T. L. (1977). A scaling method for priorities in hierarchical structures. Journal of Mathematical Psychology, 15(3), 234-281.