Differentiation Horizontal

The Differentiation Horizontal node is designed to take a list of Features, along with an optional list of Variations, and quantify the Horizontal Differentiation between each pair. The quantified Horizontal Differentiation between all of the Feature Variations is expressed as a Correlation Matrix.

When Features (or Products) cannot be rank-ordered in an objective way, they are said to exhibit Horizontal Differentiation. This means that while Customers may, on average, agree that the value of one Feature Variation is the same as the value of another Feature Variation, those Customers may disagree as to which of the two is better. There is Horizontal Differentiation because sentiment about the first Feature Variation is uncorrelated with sentiment about the second Feature Variation. In other words, Horizontal Differentiation is high when Correlation is low.

For example, the Correlation between 'Coca Cola' branded beverages and 'Pepsi Cola' branded beverages may be 0.0 or even negative (suggesting that Pepsi-drinkers actually hate Coke, and vice versa). These Products, distinguished primarily by their strong and independent Brands, both enjoy high levels of profitability because of their Horizontal Differentiation.

On the other hand, when Features can be objectively ranked then they are said to exhibit Vertical Differentiation. Horizontal Differentiation is low when Correlation is high.

For example, the Correlation between a '1-year warranty' and a '2-year warranty' will be very close to 1.0, as all Customers agree that a 2-year warranty is better than a 1-year warranty. Hence the success of these Products will depend not upon their negligible Horizontal Differentiation but upon their Vertical Differentiation.
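To make the relationship concrete, here is a minimal sketch (plain Python with NumPy, outside of the node itself) that samples hypothetical Customer sentiment and measures its Correlation. The brand names, means, and standard deviations are illustrative assumptions, not values produced by the node.

    import numpy as np

    rng = np.random.default_rng(seed=42)
    n_customers = 10_000

    # Horizontal Differentiation: sentiment for 'Coke' and 'Pepsi' is drawn
    # independently, so the Correlation between the two samples is close to 0.0.
    coke = rng.normal(loc=5.0, scale=1.0, size=n_customers)
    pepsi = rng.normal(loc=5.0, scale=1.0, size=n_customers)
    print(np.corrcoef(coke, pepsi)[0, 1])                 # ~0.0 -> high Horizontal Differentiation

    # Vertical Differentiation: a '2-year warranty' is valued as the '1-year warranty'
    # plus a premium that every Customer agrees on, so the Correlation is close to +1.0.
    warranty_1yr = rng.normal(loc=3.0, scale=1.0, size=n_customers)
    warranty_2yr = warranty_1yr + rng.normal(loc=1.0, scale=0.1, size=n_customers)
    print(np.corrcoef(warranty_1yr, warranty_2yr)[0, 1])  # ~1.0 -> low Horizontal Differentiation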

More Help: Examples and sample workflows can be found at the Scientific Strategy website: www.scientificstrategy.com.

Options

Standard Options

Rank Order of Features
If the Features are Ranked (Ordinal) then there is a stronger Correlation relationship between adjacent Features than between distant Features. For example, the way Customers think about '5-star' hotels will correlate more with how they think about '4-star' hotels than with '3-star' hotels. But if the Features are Not Ranked (Categorical) then the Correlation relationship between all Features will be set the same. For example, the way Customers think about the Feature 'style' will be no more-or-less Correlated with 'color' than with 'ambience'. These Correlations can be modified by the user if necessary using the 'Correlation Matrix To Pairs' node and the 'Correlation Pairs To Matrix' node.
Objective versus Subjective Features
What is the relationship between all of the Features in the Input Feature List? Are there very clear and objective differences that all Customers would agree on? Or are the differences between the Features subjective, such that Customers would disagree as to the relative value of those Features? For example, 'storage size', 'engine capacity', and 'number of megapixels' are all highly objective Features, and Customers cannot disagree on their relative value (Customers would all agree that 128 GB of storage capacity is better than merely 64 GB, although some still may not be willing to pay for the difference). On the other hand, 'style', 'color', and 'ambience' are highly subjective Features, and Customers would rank their preferences for 'red', 'green', and 'blue' differently. Yet other Features may be neither entirely objective nor entirely subjective. For example, 'container size' may be 'Somewhat Subjective' as Customers may generally like the greater capacity but dislike the inconvenience of a heavier container.
Correlation between Features
When an 'Objective versus Subjective Features' option is selected (from above) it will update this numeric Correlation between all Features. This numeric value can also be set manually by the user or set by a Flow Variable, and it is this numeric value that is ultimately used by the node's internal algorithm (the 'Objective versus Subjective Features' selection itself is then ignored). The Correlation between the related input Features is limited here to between 0.0 and +1.0. Highly Objective Features have a Correlation = 0.95. Very Subjective Features have a Correlation = 0.20. If the Features are Ranked (Ordinal) then the Correlation between more distant Features will step down exponentially. For example, the Correlation between Mostly Objective Features two rankings apart is: Correlation x Correlation = 0.8 x 0.8 = 0.64.
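A minimal sketch of this exponential step-down, assuming a 'Mostly Objective' base Correlation of 0.8 and the ranked hotel Features from the earlier example (plain Python with NumPy; this illustrates the arithmetic described above, not the node's exact internal algorithm):

    import numpy as np

    features = ['5-star', '4-star', '3-star', '2-star']   # Ranked (Ordinal) Features
    base_correlation = 0.8                                 # 'Mostly Objective'

    n = len(features)
    corr = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # Correlation steps down exponentially with rank distance,
            # e.g. two rankings apart: 0.8 x 0.8 = 0.64
            corr[i, j] = base_correlation ** abs(i - j)

    print(np.round(corr, 2))
    # [[1.   0.8  0.64 0.51]
    #  [0.8  1.   0.8  0.64]
    #  [0.64 0.8  1.   0.8 ]
    #  [0.51 0.64 0.8  1.  ]]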

Format Options

Output Name Format
Defines the format of the Column Names and Row Names in the Output Correlation Matrix. As all Customer Distributions ultimately need to have both Horizontal Differentiation (a Correlation Matrix) and Vertical Differentiation (Mean and Standard Deviation/SD values), these Customer Distribution names must match the names in the Vertical Differentiation tables. The names must also match the names in the Input Product Features table used when aggregating together all of the Features that make up each Product in the Market.
Feature Variation Name Delineator
Sets the delineator character between the Feature and the Variation in the columns and rows of the Output Correlation Matrix. By default, the delineator character is set to be a '.' period, but ',' comma, '_' underscore, or ' ' space may better suit the user's simulation.
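For illustration, a minimal sketch of how the '[Feature][Delineator][Variation]' names might be assembled (the Feature and Variation names are taken from the engine example later in this document; the snippet is hypothetical and not part of the node):

    features = ['Horse Power']
    variations = ['Diesel Engine', 'Gasoline Engine', 'Electric Engine']
    delineator = '.'                                       # default delineator character

    names = [f"{feature}{delineator}{variation}"
             for feature in features for variation in variations]
    print(names)
    # ['Horse Power.Diesel Engine', 'Horse Power.Gasoline Engine', 'Horse Power.Electric Engine']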

Input Ports

Icon
Input Related Features: The collection of related Feature names. These may be ordinal Features related by the fact that they can be ranked. For example, the Feature List may be '5-star', '4-star', '3-star', and '2-star'. Or these may be categorical Features that are not ranked but are nevertheless related. For example, the Features 'Japanese', 'Korean', and 'German' will be correlated (Customers generally perceive the two Asian Products as being more similar to each other than to the European Products). Unrelated Features, having no Correlation, should be generated using several separate Differentiation Horizontal nodes. Note that an input table having just a single Feature is quite normal and, in fact, desirable. It is mathematically possible to select a set of orthogonal (uncorrelated) Features that describe the Products in a Market (see "Rotations in Factor Analysis"). In this case, each orthogonal Feature should be created using a different 'Differentiation Horizontal' node, with perhaps simple Variations generated for each Product. The Input Related Features should include the following column (an illustrative example table is sketched after the column list):
  1. Feature (string): The name of all the related Features that will appear within the Output Correlation Matrix. The Horizontal Differentiation, along with the Vertical Differentiation, of the Feature needs to be described to generate a Customer Distribution and build a Product Willingness To Pay (WTP) Matrix.
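An illustrative example of the Input Related Features table (sketched here as a pandas DataFrame, using the ranked hotel Features from the example above):

    import pandas as pd

    input_related_features = pd.DataFrame({
        'Feature': ['5-star', '4-star', '3-star', '2-star']
    })
    print(input_related_features)
    #   Feature
    # 0  5-star
    # 1  4-star
    # 2  3-star
    # 3  2-star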
Icon
Input Feature Variations: (optional) A Variation of a Feature may be associated with a Brand, Product, Channel, Demographic, or Technology. A Variation may also be an Attribute from Conjoint Analysis, such that: Variation = Attribute, and Feature = Level. If the Variation is the name of a Brand, then all Products having the same Brand will exhibit the same Variation on the Feature. For example, 'Sony', 'Samsung', 'Canon', and 'Apple' may all offer their own Variations of the Features listed. The Brands 'Sony', 'Samsung', and 'Canon' may all have a Conformity = 0.95 (they all offer a normal Feature with only a little distinction), whereas 'Apple' may have a Conformity = 0.20 because Apple's Variation is highly distinctive. Note that these values do not describe whether 'Sony' is better or worse than 'Apple'. Horizontal Differentiation describes only whether Customers view the Features and Variations as similar or different. Vertical Differentiation is also required to determine which is 'better'. The Input Feature Variations should include the following columns:
  1. Variation (string): The Variation name to give to each of the related Features. For example, if a Feature is 'Horse Power' then the Variations might be 'Diesel Engine', 'Gasoline Engine', and 'Electric Engine'.
  2. Feature (string): (optional) If a Feature is specified in the Input Feature Variations list, then only the specified Features will have the Variation. If the Feature column is missing, or if the Feature cell is blank, then all Features will have this Variation.
  3. Conformity (double): (optional) The degree of Conformity the Variation has to a Feature norm (range limited to between 0.0 and +1.0). Conformity = 1.0 (default) means that the Variation precisely offers what is expected from the normal Feature. Conformity = 0.0 means that the Variation is vastly different and unpredictable from the norm. Conformity = 0.95 is typical, and would be used to generate a range of Features that all offer small Variations around what is accepted as a Feature norm. In this example, 'Diesel Engine' and 'Gasoline Engine' might both have a Conformity = 0.9 while 'Electric Engine' might have a Conformity = 0.3.
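An illustrative example of the Input Feature Variations table (sketched as a pandas DataFrame, using the engine example above; the Conformity values are assumptions for illustration only):

    import pandas as pd

    input_feature_variations = pd.DataFrame({
        'Variation':  ['Diesel Engine', 'Gasoline Engine', 'Electric Engine'],
        'Feature':    ['Horse Power',   'Horse Power',     'Horse Power'],
        'Conformity': [0.9,             0.9,               0.3],
    })
    print(input_feature_variations)
    #          Variation      Feature  Conformity
    # 0    Diesel Engine  Horse Power         0.9
    # 1  Gasoline Engine  Horse Power         0.9
    # 2  Electric Engine  Horse Power         0.3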

Output Ports

Icon
Output Correlation Matrix: The output set of correlations that define the relationship between Feature Variations and downstream Customer Distributions. The Correlation Matrix will be symmetrical such that the number of data rows matches the number of columns. Each row [Feature].[Variation] name will be unique and correspond to a column of the same name. The Output Correlation Matrix will contain these columns (an illustrative sketch of the layout follows the column list):
  1. Distribution: The row name of the [Feature].[Variation] within the Output Correlation Matrix.
  2. Order: The Order of each unique Feature if Features are Ranked (Ordinal).
  3. Correlated Distributions: The column name of the [Feature].[Variation] within the Output Correlation Matrix, along with the degree of correlation to the row [Feature].[Variation]. Output correlations will be symmetrical and range-limited to between -1.0 and +1.0.
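An illustrative sketch of the layout of the Output Correlation Matrix for a single Feature with three Variations (the correlation values shown here are made up for illustration and are not what the node would actually compute):

    import pandas as pd

    names = ['Horse Power.Diesel Engine',
             'Horse Power.Gasoline Engine',
             'Horse Power.Electric Engine']

    matrix = pd.DataFrame(
        [[1.00, 0.85, 0.30],
         [0.85, 1.00, 0.30],
         [0.30, 0.30, 1.00]],
        index=names, columns=names)
    matrix.insert(0, 'Order', [1, 1, 1])       # one Order per unique Feature
    matrix.index.name = 'Distribution'
    print(matrix)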
Icon
Output Correlation Repaired Matrix: The repaired output set of correlations that define the relationship between Feature Variations and downstream Customer Distributions. Repairing is required when the correlations are unrealistic. For example, if A is highly correlated with B (for example, A:B = +0.99) and A is highly correlated with C (for example, A:C = +0.99) then B must also be highly correlated with C (that is, B:C >> 0.0). More precisely, the Correlation Matrix must be positive semi-definite: none of its Eigenvalues can be negative. Note that it is not necessary for downstream nodes that generate Customer Distributions (such as the Matrix Distributions node or the Feature Generation node) to use this Correlation Repaired Matrix, as these downstream nodes will always first self-repair the Input Correlation Matrix. The Output Correlation Repaired Matrix will contain the same columns as the Output Correlation Matrix (a sketch of one possible repair follows the column list):
  1. Distribution: The row name of the [Feature].[Variation] within the Output Correlation Repaired Matrix.
  2. Order: The Order of each unique Feature if Features are Ranked (Ordinal).
  3. Correlated Distributions: The column name of the [Feature].[Variation] within the Output Correlation Matrix, along with the repaired degree of correlation to the row [Feature].[Variation]. Output correlations will be symmetrical and range-limited to between -1.0 and +1.0.
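A minimal sketch of one common way such a repair can be done (a generic eigenvalue-clipping approach, assumed here for illustration, and not necessarily the node's own repair algorithm): clip any negative Eigenvalues to zero, rebuild the matrix, and rescale it back to a unit diagonal.

    import numpy as np

    # A is almost perfectly correlated with both B and C, yet B:C = 0.0 -> impossible
    corr = np.array([[1.00, 0.99, 0.99],
                     [0.99, 1.00, 0.00],
                     [0.99, 0.00, 1.00]])

    eigenvalues, eigenvectors = np.linalg.eigh(corr)
    print(eigenvalues)                         # one Eigenvalue is negative -> not a valid Correlation Matrix

    clipped = np.clip(eigenvalues, 0.0, None)  # remove the negative Eigenvalue
    repaired = eigenvectors @ np.diag(clipped) @ eigenvectors.T

    d = np.sqrt(np.diag(repaired))             # rescale so the diagonal is exactly 1.0
    repaired = repaired / np.outer(d, d)

    print(np.round(repaired, 2))               # the repaired Correlation Matrix
    print(np.round(corr - repaired, 2))        # the corresponding Error Matrix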
Icon
Output Correlation Error Matrix: The difference between the Output Correlation Matrix and the Output Correlation Repaired Matrix. This is a convenience output to show how the Correlation Matrix needs to be repaired before Customer Distributions for the Feature Variations can be generated. The Output Correlation Error Matrix will contain the same columns as the Output Correlation Matrix:
  1. Distribution: The row name of the [Feature].[Variation] within the Output Correlation Error Matrix.
  2. Order: The Order of each unique Feature if Features are Ranked (Ordinal).
  3. Correlated Distributions: The column name of the [Feature].[Variation] within the Output Correlation Matrix, along with the difference between the output correlation and the repaired correlation.

Views

This node has no views

Workflows

Links

Developers
