Univariate Estimation Editor

Use the Univariate Estimation Editor option to perform grade estimations on your block model. The Univariate Estimation Editor allows you to set up parameters used to perform inverse distance, simple kriging, ordinary kriging, indicator kriging, and uniform conditioning for a single variable.

Note:  To set up parameters to perform calculations for multiple variables, use the Multivariate Estimation Editor instead. Use the Validation Editor to perform Jack Knife and Cross Validation.


Instructions

On the Block menu, point to Grade Estimation, then click Univariate Estimation Editor.

Samples Selection

Follow these steps:

  1. Enter a name for the Estimation file, or select it from the drop-down list. The drop-down list displays all files found in the current working directory that have the (.bef) extension. Click Browse to select a file from another location.

    Note:  The information entered here is also used to provide the name of the report that is generated and stored in your project folder. The naming convention used for the report is <bef_file>_<estimation id>.bef_report. The reports can be opened with Vulcan's Text Editor, or any other text editor.

    Tip

    You can make changes to an existing estimation file, then click the Save As icon to save the edited version under a new name. This allows you to transfer all the parameter settings from the Estimation Editor to the new file without having to enter everything in again.

    1. Enter or select an Estimation ID. To create a new estimation ID, click the New icon as shown below, and provide a unique name for the current panel settings. Up to nine separate IDs can be created for each BEF file.

      Figure 1 : Click to create a new estimation ID.

      The ID Description is optional. However, descriptions can be very helpful, especially when you are creating multiple IDs.

      If you need to edit these fields, click the icon to open the estimation file Options panel, then make your changes.

      Figure 2 : Click to open the estimation file Options panel.

      Setting up multiple estimation passes

      Click the Estimation Passes icon located at the top of the panel, as shown below.

      Tip:  This should be done after you have completed the main panel entries for the initial estimation pass.

  2. Select the Estimation Mode from the options in the drop-down list. There are three options available.

    1. Block Model Estimation - Perform standard estimation using non-declustered data and customised distances to samples. This is the most common type of estimation. The estimators available when using this option are inverse distance, simple kriging, ordinary kriging, indicator kriging, and uniform conditioning.

    2. Estimation Declustering - Use declustered data while estimating. The estimators available when using this option are inverse distance, simple kriging, and ordinary kriging.

      Note

      Using this option will require you to select a declustering weight variable from the block model. If your block model does not have a declustering weight variable, you will not be able to use this option.

    3. Global Estimation - Perform estimation using non-declustered data and without limitations applied to search distances. The estimators available when using this option are simple kriging and ordinary kriging.

  3. Select the type of Estimator you want to apply. The options will change depending on your selection of estimator.

  4. Select the Block Model from the drop-down list. The drop-down list displays all files found in the current working directory that have the (.bmf) extension. Click Browse to select a file from another location.

  1. Specify the name of the database or mapfile that contains the sample data. The drop-down list labelled Select Database or Map File contains all of the Vulcan database files and mapfiles found within your current working directory. Click Browse to select a file from another location.

    Alternatively, you can use an ODBC link to connect to a database that contains the sample data. Click here for information about ODBC databases.

  2. Enter the name of the Sample Group (database key) to be loaded. Wildcards (* multi-character wildcard and % single character wildcard) may be used to select multiple groups.

    Note:  Multiple groups only apply to Isis databases (ASCII mapfiles consist of one group).

  3. Select the names of the fields containing the X, Y and Z coordinates from the Location field drop-down lists.

  4. Select the field that the estimates will be calculated from using the Grade field drop-down list.

  5. Select the Use variable weighting checkbox if you want to multiply the sample weights by the value of the specified weighting variable. The weighting variable may represent a specific gravity, sample length, or sample weight.

    Example:  A block with two samples is to be estimated. The first sample has a grade value of 2 and a weight variable of 1. The second sample has a grade value of 5 and a weight variable of 2.5. The weights, from ordinary kriging, are 0.8 for the first sample and 0.2 for the second sample. Without variable weighting the estimated grade is:

    (0.8 × 2 + 0.2 × 5) = 2.6

    With variable weighting the estimate is:

    (0.8 × 1 × 2 + 0.2 × 2.5 × 5) ÷ ( 0.8 × 1 + 0.2 × 2.5 ) = 3.15385

    If the variable weighting is 0 for a block, then the default grade value is stored.
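
    The weighting arithmetic above can be reproduced with a short calculation. The following is a minimal sketch in plain Python (illustrative variable names only, not Vulcan code):

      # Reproduce the variable weighting example above.
      grades          = [2.0, 5.0]
      kriging_weights = [0.8, 0.2]
      weight_variable = [1.0, 2.5]   # e.g. specific gravity or sample length

      # Without variable weighting (the kriging weights already sum to 1):
      estimate_plain = sum(w * g for w, g in zip(kriging_weights, grades))
      print(estimate_plain)          # 2.6

      # With variable weighting: multiply each kriging weight by the
      # weighting variable, then re-normalise.
      num = sum(w * v * g for w, v, g in zip(kriging_weights, weight_variable, grades))
      den = sum(w * v for w, v in zip(kriging_weights, weight_variable))
      print(num / den)               # 3.15385 (approximately)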

  6. In the section labelled Sample Database Manipulation, you can apply a logarithmic transformation to all sample values. Click Apply logarithm, and then enter the constant that you want to use.

    Note:   All original sample values must be positive for the logarithm to be defined.

    The specified logarithm constant is added to the calculated logarithm.

  7. If you want to apply cut-offs to the grades used in the estimation, click Use cut grades. Specify a lower grade cut value (grades lower than this value are set to this value) and an upper grade cut value (grades above this value are set to this value).
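
    As a rough illustration only (a sketch, not Vulcan's implementation), the cut grades behave like a clamp on each sample value:

      # Sketch of cut grades: values outside the range are set to the
      # nearest cut value (hypothetical cut values shown).
      def apply_cut_grades(grade, lower_cut, upper_cut):
          return min(max(grade, lower_cut), upper_cut)

      print(apply_cut_grades(0.02, 0.05, 30.0))   # 0.05 (raised to the lower cut)
      print(apply_cut_grades(45.0, 0.05, 30.0))   # 30.0 (reduced to the upper cut)
      print(apply_cut_grades(5.00, 0.05, 30.0))   # 5.0  (unchanged)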

Select Using Filter

Use this panel to filter your sample data by setting restrictions on samples values, triangulations used, character values within database fields, and field attributes. You can use any combination of the four filters, or not use any at all. This panel is optional.

Example:  The filters shown above illustrate one possible way to filter data. They ignore all default assay values of -99.0, include only samples that fall within the three vein triangulations, ignore any sample that has a GEOCOD string equal to "NONE", and include samples that have an attribute label of TQ1, TQ2, or TQ3.

The following steps show how to set your filters.

Estimation Result Variables

Use this panel to select the grade variable you want to run estimations on, along with any additional output variables. The only variable that is required is the initial grade variable. All of the output variables are optional and not needed to run your estimations.

Follow these steps:

  1. First, choose the primary variable you want to use for the estimations from the Grade variable drop-down list.

    Only valid variables will be displayed in the list.

  2. Enter a default value for blocks that are not estimated.

  3. Select any output variables you want to include in your estimation run. Depending on which estimation method you are using, the available output variables will change. The table below shows the entire list of output variables and the estimation methods in which they are used.

    To quickly populate the table with all the variables, click the Fill output variables button.

    Output Variable Type                        | Output Variable          | Default Value | ID | SK | OK | IK | UC
    Distance to Closest Sample (Anisotropic)    | au_ids_dist_c_an         | -99.0         | *  | *  | *  |    | *
    Distance to Closest Sample (Cartesian)      | au_ids_dist_c_ca         | -99.0         | *  | *  | *  |    | *
    Flag when Estimated                         | au_ids_flag_variable     | 0.0           | *  | *  | *  | *  | *
    Grade of the Closest Sample (Anisotropic)   | au_ids_grade_c_an        | -99.0         | *  | *  | *  |    | *
    Grade of the Closest Sample (Cartesian)     | au_ids_grade_c_ca        | -99.0         | *  | *  | *  |    | *
    Maximum Weight                              | au_ids_max_weight        | -99.0         | *  | *  | *  |    |
    Minimum Weight                              | au_ids_minimum_weight    | -99.0         | *  | *  | *  |    |
    Number of Holes                             | au_ids_num_holes         | 0.0           | *  | *  | *  | *  | *
    Number of Samples                           | au_ids_num_samples       | 0.0           | *  | *  | *  | *  | *
    Sample Coeff. of Variation                  | au_ids_sample_coeff      | -99.0         | *  | *  | *  |    |
    Sample Grade Minus Estimation               | au_ids_sample_grade_diff | -99.0         | *  | *  | *  |    |
    Sample Mean                                 | au_ids_sample_mean       | -99.0         | *  | *  | *  |    |
    Sample Standard Deviation                   | au_ids_sample_stddev     | -99.0         | *  | *  | *  |    |
    Sample Variance                             | au_ids_sample_variance   | -99.0         | *  | *  | *  |    |

    The first column labelled Output Variable Type shows a description of the variable. This field cannot be edited.

    The second column labelled Output Variable is the variable name that your block model will use. This field can be edited if desired.

    Note:  The name of a variable can have a maximum of 30 alphanumeric characters. The variable, which can only be entered using lowercase characters, must start with a letter and can only contain alphanumeric characters and/or underscores, for example, variable_1.

    This is followed by the Default Value. This field can also be edited.

  4. If you are using Simple Kriging as your estimation method, the option to manually define a mean will be enabled.

    There are three options to choose from.

    1. Use stationary mean - Select this option if you want to manually define the stationary mean value.

    2. Use mean variable - Select this option to take the locally varying mean into consideration. The mean variable is used for the sample mean and the block mean.

      With this option, you can set the default value for all samples that are located outside the model area by entering a value into Drift value.

      Note:  The mean variable value is used if a block is found at the XYZ coordinates of the sample; otherwise, the stationary mean value is used. In earlier versions of Vulcan these were exclusive. To keep the previous behaviour, set a value of 0.0 for the stationary mean.

    3. Use block mean variable - Select this option to use the variable that holds just the block mean and not the sample mean.

Inverse Distance

Use this panel to enter the anisotropic weightings applied to each direction of the search axes. The panel will display different options depending on whether you are using inverse distance or kriging methods for your estimation.

Follow these steps if you are using Inverse Distance.

  1. In the space provided for Anisotropic Radii, enter the weightings for the X, Y, and Z directions. These weightings are a ratio of the lengths of the major, semi-major and minor radii. Thus the major (X) axis weighting is 1 (one), the semi-major (Y) axis weighting is the length of the major radius divided by the length of the semi-major radius, and the minor (Z) axis weighting is the length of the major axis radius divided by the length of the minor axis radius. Please note that it is possible to use equivalent ratios; that is, 1:0.5:0.2 is the same as 10:5:2. The weightings are the inverse of the radii ratios (a worked sketch follows these steps).

    Example:  If the ratio is 1:0.5:0.2, then the weightings are 1, 2 and 5. The Distances to Samples panel (in Grade Estimation) and the New Estimation File option use these weights for the Anisotropic distance derived from the anisotropic weights option.

    If desired, you can click Normalise to calculate the anisotropic weightings normalised to the search radii. The distances used here are those entered into the Standard Major, Semi-major, and Minor axis settings found on the Search Region pane. You may still edit the weightings.

  2. Enter the power to use on the inverse distance by entering the number in the Power textbox. A value of 2.0 means inverse distance squared while a value of 3.0 means inverse distance cubed, etc.
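
    The sketch below (plain Python, hypothetical sample offsets and grades) shows how the anisotropic weightings and the inverse distance power are typically combined. It assumes the common convention that each axis offset is multiplied by its weighting before the Euclidean distance is taken, and is not Vulcan's exact implementation:

      import math

      # Anisotropic weightings are the inverse of the radii ratios:
      # a ratio of 1 : 0.5 : 0.2 gives weightings of 1, 2 and 5.
      weights_xyz = (1.0, 2.0, 5.0)
      power = 2.0   # 2.0 = inverse distance squared

      def aniso_distance(dx, dy, dz, w=weights_xyz):
          return math.sqrt((w[0] * dx)**2 + (w[1] * dy)**2 + (w[2] * dz)**2)

      # Hypothetical samples: (offset from the block centre, grade).
      samples = [((10.0, 0.0, 0.0), 2.0), ((0.0, 10.0, 0.0), 5.0)]

      num = sum(g / aniso_distance(*d)**power for d, g in samples)
      den = sum(1.0 / aniso_distance(*d)**power for d, _ in samples)
      print(num / den)   # inverse distance estimate for the block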

Variography

  1. Select a variogram file. This step is optional. You can choose to enter the variogram information directly into the table if you desire to do so. To select the file, use the drop-down list labelled Read variogram from file, or click the Browse button if the file is located somewhere other than your current working directory.

    All the files that have a (.vrg) extension will be shown in the list.

    Note:  To generate a variogram file, you can use an exported file from the Data Analysis tools located in the Analyse menu.

  2. Enter the Nugget. This represents the random variability and is the value of the variogram at distance (h) ~ 0.

  3. Complete the table defining the variogram parameters to be used. Enter the information by typing into the space provided for each column.

    Tip:  If you want to use block model variables instead, enable the Use block model variables checkbox, then use the drop-down lists to populate each field. The drop-down lists will automatically display all eligible variables from the block model selected at the top of the panel.

    Explanation of table columns

Search Region

Use this panel to set the search direction and how far to look for samples.

Follow these steps:

  1. Select the shape of your search area. Select Ellipsoid or Box.

    The major radius is "a", the semi-major radius is "b" and the minor radius is "c".

    Note:   The volume of a box with sides 2a, 2b, and 2c is about twice the volume of an ellipsoid with radii a, b, and c. Therefore, when using the box search you collect about twice as many sample points. This causes an estimation using a box search to take from two to eight times as much time as an estimation with a search ellipsoid.
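
    As a quick check of the "about twice" factor quoted in the note (a worked calculation, written in LaTeX notation):

      V_{box} = (2a)(2b)(2c) = 8abc, \qquad
      V_{ellipsoid} = \tfrac{4}{3}\pi abc \approx 4.19\,abc, \qquad
      \frac{V_{box}}{V_{ellipsoid}} = \frac{6}{\pi} \approx 1.91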

  2. Set the search orientation.

    The Bearing, Plunge, and Dip values are angles, in degrees, that specify the orientation of the search ellipsoid and orientation of variogram structures.

    Care must be taken with these parameters as there are several common misunderstandings about their meaning.

    Bearing

    Bearing is the angle of the Major axis as it rotates clockwise from the north. It is the angle of the strikeline of the orebody.

    Plunge

    Plunge is the angle of the Semi-major axis as it rotates around the minor axis. Note that the plunge should be negative for a downward pointing ore body.

    Dip

    Dip is the angle of the minor axis of an orebody as it rotates around the Major axis. Think of it as the angle that an orebody tips from side to side when looking down the strikeline.

    Note:   The terms bearing, plunge and dip have been used by various authors with various meanings. In this panel, as well as kriging and variography, they do not refer to true geological bearing, plunge, and dip. The terms X', Y', and Z' axis are used to denote the rotated axes as opposed to X, Y, and Z which denote the axes in their default orientation.

  3. Enter the dimensions of the search box. The search box has sides with length twice the numbers given. The major axis radius is the search distance along the axis of the ore body. The semi-major radius is the search distance in the ore body plane perpendicular to the ore body axis. The minor axis radius is the search distance perpendicular to the ore body plane.

    Note

    The search radii are true radii. If you set your major search radius to '100', then the ellipsoid has a total length of 200. The following diagram shows the relationship between the axes with the ellipse in the default orientation (bearing 90°, plunge and dip 0.00°).

    Relationship between radii

  4. If you are using a tetrahedral model, you can select the Unfold checkbox to select a (*.tetra) file from the Unfolding spec file drop-down list. This list will contain all (*.tetra) files found within your current working directory. Refer to the Unfolding section for more information on tetrahedral modelling.

    Note

    When using an Unfolding spec file, the panel will read the type of method used in the Unfolding Model (LVA, Bend or Projection) and lock the panel options according to the parameters set in each of these methods.

  5. Click the Interactive button to display the Edit Interactive Ellipsoid panel and define search ellipsoids on screen in Vulcan. Once defined in Vulcan, the parameters will be written back into the panel.

Discretisation Steps

Use this panel to set the parameters for the discretisation steps that will be used.

Note:  If you want data located at the block centroids to be used for block estimates, then run another estimation using a maximum of one (1) sample and storing the average distance to samples. Then run a block calculation or a script on the block model and assign the new grade value to your results with the condition that the average distance is zero (0) (provided the default is not set to 0).

Estimates are always made as block estimates. This means that even if you have data at exactly the block centre, that data is not used directly as the value for the block in ordinary kriging or inverse distance; instead, all samples are used and an estimate is produced in the usual manner. However, because data at a block centroid attains nearly all of the weighting under the inverse distance method, the estimate closely approximates that data value.

Note:  The panel is displayed if you have selected Inverse Distance, Simple Kriging, or Ordinary Kriging as an estimation method. It is not available if you are using Uniform Conditioning.

Follow these steps:

  1. Enter the number of steps you want to utilise into the spaces provided for Discretisation steps in X/Y/Z. The step sizes indicate how many grid points will be used inside a block or sub-block. Envisage uses the grid points for calculating block averages. For point kriging, each discretisation size should be set to 0 (zero). A large number of grid points can cause Envisage to run slowly. A 4 × 4 × 4 grid is usually sufficient. In no case should there be more than 10 000 grid points.

  2. If you are using Inverse Distance as the estimation method, you can elect to use the average of discretised block distances to samples when using discretisation for ID weight calculation, by enabling the Average distance for discretised blocks option.

    Note:  This option is not available if you are using kriging methods.

  3. Decide whether you want to estimate sub-blocks as if they were the parent blocks. This means that instead of using the block extent of the sub-block, the estimation procedure uses the extent of the parent block. To do this, enable the checkbox Assign parent block values to sub blocks.

    Example:  Suppose we have a primary scheme with a block size of (50, 50, 50) and we are estimating a sub-block whose extent ranges from (60, 30, 50) to (70, 40, 70). This sub-block has sides of length 10, 10, and 20. The parent block that contains it ranges from (50, 0, 50) to (100, 50, 100), has sides of 50, 50 and 50, and has its centre at (75, 25, 75). The sub-block is estimated as if its centre were at (75, 25, 75) and as if its sides were 50, 50 and 50.
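
    A minimal sketch of how the containing parent block can be derived (plain Python, assuming a regular parent scheme aligned to the model origin; the values reproduce the example above):

      import math

      block_size = (50.0, 50.0, 50.0)
      sub_min, sub_max = (60.0, 30.0, 50.0), (70.0, 40.0, 70.0)

      sub_sides = tuple(hi - lo for lo, hi in zip(sub_min, sub_max))
      print(sub_sides)               # (10.0, 10.0, 20.0)

      parent_min = tuple(math.floor(lo / s) * s for lo, s in zip(sub_min, block_size))
      parent_max = tuple(p + s for p, s in zip(parent_min, block_size))
      parent_centre = tuple((a + b) / 2.0 for a, b in zip(parent_min, parent_max))

      print(parent_min, parent_max, parent_centre)
      # (50.0, 0.0, 50.0) (100.0, 50.0, 100.0) (75.0, 25.0, 75.0)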

    If you decide to use this feature, you are presented with three options:

Distance to Samples

Use this panel to set the parameters to store the search distances used to look for your samples. Any, all, or none of the different distance measures can be stored by putting a block model variable name in the appropriate panel item. If you do not want to store a value, then leave the panel item blank.

Follow these steps:

  1. Enter a variable name into the appropriate box. There are three different kinds of distances that can be stored:

  1. Next, if you want a default value to be stored as the average distance when there aren't enough samples available to make an estimate, enter a number into Default distance when no estimate is made.

Sample Count

Use this panel to set the minimum and maximum number of samples that will be used to estimate the grade of a block.

Follow these steps:

  1. Enter the lowest number of samples in the box labelled Minimum number of samples per estimate.

    This is the minimum number of samples that needs to be found to generate an estimate. Blocks with fewer than this number of samples within the search ellipsoid or search box are assigned the default grade value.

  2. Enter the largest number of samples in the box labelled Maximum number of samples per estimate.

    This is the maximum number of samples to be used in any grade estimation. Up to 999 samples per estimate are allowed.

    Example:  The estimation program may find 30 samples near a block centre. If you had specified a maximum of 10 samples, then only the 10 samples closest to the block centre are used. The distance to the block centre is calculated by an anisotropic distance based on the search radii.

  3. Decide whether or not to use an octant search. The space around a block centre is divided into eight octants by three orthogonal planes.

    Note:  An octant search is a declustering tool used to reduce imbalance problems associated with samples lying in different directions. If there are more samples in one direction than another, then this option limits the bias.

    If you want to use an octant search, enable the checkbox labelled Use Octant Based Search. This places a limit on the number of samples that can come from a given octant.

  4. Enter the maximum number of samples from each octant to be used in the estimation in the box labelled Maximum samples per octant. Samples closest to the block centre are used first.

    Note:  The maximum number of samples per estimate always applies, regardless of the maximum samples per octant value.

  5. Enable the checkbox labelled Additional restrictions if you want to limit the number of samples for octants. Then enter the minimum octants with samples and the minimum samples per octant.

    The minimum octants with samples enables you to specify the number of octants that must contain samples for an estimate to be generated. The minimum samples per octant enables you to specify the number of samples per octant that needs to be found to generate an estimate. These two restrictions work together. An octant is considered filled if it contains at least the minimum number of samples per octant. The minimum number of octants with samples requires that at least that number of octants be filled.

    Example:  If you set the minimum number of samples per octant to 2, the minimum number of octants to 3 and have the following number of samples per octant, there are two filled octants. As this is less than the minimum number of octants with samples, the default value is assigned to this block (a sketch reproducing this check follows the table).

    Octant Number | Number of samples per octant | Filled / Not Filled
    0             | 1                            | Not filled
    1             | 3                            | Filled
    2             | 2                            | Filled
    3             | 1                            | Not filled
    4             | 1                            | Not filled
    5             | 1                            | Not filled
    6             | 1                            | Not filled
    7             | 0                            | Not filled
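
    A short sketch of this check in plain Python (illustrative only, not Vulcan's code):

      # Octant restriction check from the example above.
      samples_per_octant = [1, 3, 2, 1, 1, 1, 1, 0]   # octants 0..7
      min_samples_per_octant = 2
      min_octants_with_samples = 3

      filled = sum(1 for n in samples_per_octant if n >= min_samples_per_octant)
      estimate_allowed = filled >= min_octants_with_samples
      print(filled, estimate_allowed)   # 2 False -> the block receives the default value
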
  6. Enable the checkbox labelled Store octants used if you want to save the number of octants in a block model variable.

  7. Enable the checkbox labelled Store octants information if you want to save information concerning which octants were used by each estimated block in a block model variable.

  8. Select octant rotation type.

    Rotate axes - This consists of three planes perpendicular to axes that have been rotated 45° about the Z axis and 35° about the X' axis (the X axis after rotation about the Z axis). This produces a set of planes in which the first has a bearing of 135°, the second has a bearing of 45° and is at an angle of -55° to the horizontal, and the third also has a bearing of 45°, but is at an angle of 35° to the horizontal. (See the diagram below.)
    Cartesian axes - This consists of three planes perpendicular to the conventional Cartesian X, Y, and Z (or East, North, and elevation) axes.
    Ellipsoid axes - This consists of three planes that are perpendicular to the major, semi-major, and minor axes of the search ellipsoid.

  9. Select the method by which the samples will be measured by choosing either Select by anisotropic distance or Select by Cartesian distance. The samples are sorted by distance (either anisotropic or Cartesian) prior to the samples being limited. This ensures that the closest samples are kept.

  10. To prevent the same sample from being used more than once, enter the distance to use in Distance to check duplicate samples. Samples less than or equal to the specified distance value are considered to be duplicates, resulting in the entire grade estimation process being stopped. You can disable this feature by specifying a distance value of '-1'.

Cutoff

Use this panel to set the parameters for the cutoff you want to evaluate.

  1. Enter the block model variable in which to store the indicator variable by selecting it from the Variable drop-down list.

  2. Specify the kriging variance from the variogram model for this cutoff by selecting it from the Variance drop-down list.

  3. Enter the Default value (the default value is '0'). This value is used for the indicator and variance when the block is not estimated.

  4. Enter the grade Cutoff level. The cutoffs should be entered from lowest cutoff to highest cutoff. A sample value equal to the cutoff is considered to be in the interval above the cutoff.

  5. Set up the structures by clicking the icon in the Structs column.

    Start by selecting a variogram file. This step is optional. You can choose to enter the variogram information directly into the table if you desire to do so. To select the file, use the drop-down list labelled Read variogram from file, or click the Browse button if the file is located somewhere other than your current working directory.

    All the files that have a (.vrg) extension will be shown in the list.

    Note:  To generate a variogram file, you can use an exported file from the Data Analysis tools located in the Analyse menu.

    Next, enter the Nugget. This represents the random variability and is the value of the variogram at distance (h) ~ 0.

    The next step is to complete the table defining the variogram parameters to be used. Enter the information by typing into the space provided for each column.

    Tip:  If you want to use block model variables instead, enable the Use block model variables checkbox, then use the drop-down lists to populate each field. The drop-down lists will automatically display all eligible variables from the block model selected at the top of the panel.

    Explanation of table columns

Post Indicator

Use this panel to set up the distribution. After calculating the indicator values for the cutoffs, you will have a discrete distribution. You can extend the discrete distribution to a continuous distribution with a certain mean and variance. Because of the difference between point and block samples, the variance of the distribution is too large. To compensate for this, it is desirable to reduce the variance of the distribution.

  1. Choose the type of variance reduction you want to apply by selecting one of the three Volume Correction methods:

    • No volume correction

    • Affine volume correction - The distribution is linearly rescaled by the square root of the variance reduction factor while maintaining the same mean value.

    • Indirect lognormal volume correction - This adjusts the distribution, preserving the mean and changing the variance, assuming that the distribution is lognormally distributed. The cutoffs of the distribution are replaced by new cutoffs computed from each cutoff level (cut) and two parameters, a and b, that Vulcan calculates to change the variance.

    The cutoffs are further adjusted to restore the mean of the distribution.

    Note:   The adjustment of the cutoffs does not change the cutoffs used for computing the indicators, only the cutoffs used for computing the distribution statistics.

    The parameters a and b are calculated as:
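
    The exact expressions are not reproduced in this text. For reference only, a commonly used form of the indirect lognormal correction (an assumption, not necessarily Vulcan's exact formula) replaces each cutoff cut with a new cutoff of the form a × cut^b, where, in LaTeX notation,

      b = \sqrt{\frac{\ln\left(f\,CV^{2} + 1\right)}{\ln\left(CV^{2} + 1\right)}},
      \qquad
      a = \frac{m}{\sqrt{f\,CV^{2} + 1}}\left(\frac{\sqrt{CV^{2} + 1}}{m}\right)^{b}

    with m the mean of the distribution, CV its coefficient of variation, and f the variance reduction factor.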

  1. If you select Indirect lognormal volume correction, you will need to also enter a Variance reduction factor within the range of 0 to 1. The variance of the distribution is multiplied by this factor.

    The Post Indicator process can construct a continuous distribution from the discrete distribution produced by indicator kriging. To do this, several assumptions are made that are controlled by the following parameters.

  2. Enter the minimum and maximum thresholds, that is, the lowest and highest grade values of the distribution, in the spaces labelled Min value and Max value.

    For the minimum of the distribution, enter the lowest grade value in the distribution. This should be less than the lowest cutoff. Often zero (0) or the minimum detectable limit are appropriate minimum values.

    For the maximum of the distribution, enter the highest grade value in the distribution. This should be greater than the highest cutoff. Increasing this value causes higher values to be produced when selecting grade values from above the highest cutoff.

  3. Enter the left tail distribution parameter. This controls the shape of the curve from the minimum of the distribution to the lowest cutoff. If the parameter is '1', then the curve is a straight line between the minimum distribution value and the lowest cutoff. If it is less than 1, then the distribution is skewed to the left (the curve is above a straight line). If it is greater than 1, then the distribution is skewed to the right (the curve is below a straight line).

    The choices are:

    • Linear Model

    • Power Model

    Enter the middle interval distribution parameter. This is like the left tail distribution parameter, but applies to the intervals between the lowest cutoff and the highest cutoff.

    Enter the right tail distribution parameter. This is like the left tail distribution parameter, but applied to the intervals between the highest cutoff and the maximum of the distribution.

    The choices are:

    • Linear Model

    • Power Model

    • Hyperbolic Model

  1. Enter the number of samples to be used when computing distribution means in the space labelled Discretisation of the distribution. A discretisation of 20 is usually fairly reasonable. Using a very fine discretisation slows the computation of distribution means without producing a more meaningful or more accurate number. If a discretisation of 20 is used, then the distribution is sampled at steps of 5 percent, from 2.5 percent to 97.5 percent.

    Note:  This would miss the upper tail in the 98th and 99th percentiles.
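
    A minimal sketch (plain Python, not Vulcan's code) of how a discretisation of 20 samples the distribution:

      # With a discretisation of n = 20, the distribution is sampled at
      # steps of 5 percent, from 2.5 percent to 97.5 percent.
      n = 20
      probabilities = [(i + 0.5) / n for i in range(n)]
      print(probabilities[0], probabilities[-1])   # 0.025 0.975

      # The distribution mean would then be approximated as the average of
      # the grade values read from the continuous distribution at these
      # probabilities; quantile(p) is a hypothetical placeholder here.
      # mean_estimate = sum(quantile(p) for p in probabilities) / n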

    Having defined the parameters above, the following results can be computed from the continuous distribution. The total grade result is not computed from the continuous distribution.

  2. Select the variable for the computed mean of the continuous distribution from the drop-down list labelled Mean of the distribution.

  3. From the drop-down list labelled Coefficient of variation of the distribution, select the variable for the computed coefficient of variation of the continuous distribution after variance reduction, if any, has been applied. The coefficient of variation is the standard deviation divided by the mean.

  4. Select the variable for the grade in the distribution, after variance reduction. Enter a probability in the space labelled Probability (from 0 to 1), then select the variable for the grade from the drop-down list labelled Grade value at a given probability. For example, using a probability of 0.01 causes the top 1 percent grade value to be stored in this variable.

  5. Enter a Grade Cutoff.

  6. Select the variable in which to store the Probability for grade cutoff.

  7. Select the variables in which to store the average grade in the distribution above and below the given grade cutoffs.

  8. Select the variable in which to store the total grade for the distribution. The grade is computed using the parameters below. The total grade does not depend on the parameters used to extend the discrete distribution to a continuous one. The total grade is calculated by taking the difference in indicator variables that represents the probability of being in the given interval and multiplying by the assumed grade for the given interval. The sum of these interval grades is the total grade. The indicator values will have an order relations correction applied to them before the total grade is computed.
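
    The following sketch illustrates the total grade calculation described above (plain Python with hypothetical cutoffs, indicator values, and interval grades; the simple monotonic clipping stands in for the order relations correction, which Vulcan may implement differently):

      # Indicator values at each cutoff, here taken as the probability of
      # being below the cutoff (the complementary convention works the same way).
      cutoffs    = [0.5, 1.0, 2.0]
      indicators = [0.30, 0.55, 0.85]

      # Order relations correction (simplified): force the indicators to be
      # non-decreasing and within [0, 1].
      corrected, prev = [], 0.0
      for p in indicators:
          prev = min(max(p, prev), 1.0)
          corrected.append(prev)

      # Assumed grade for each interval: below the lowest cutoff, between
      # successive cutoffs, and above the highest cutoff.
      interval_grades = [0.3, 0.75, 1.5, 3.0]

      # Probability of falling in each interval = difference of indicators.
      bounds = [0.0] + corrected + [1.0]
      probabilities = [b - a for a, b in zip(bounds[:-1], bounds[1:])]

      total_grade = sum(p * g for p, g in zip(probabilities, interval_grades))
      print(total_grade)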

Sample Limits

Use this panel to limit the effect a high-valued sample will have on distant blocks.

Important:  Using this option will not place a grade cap on the high-yield samples. It will prevent them from being used in the estimation completely.

The high-yield exclusion ellipsoid specified through this section of the Estimation Editor interface has its own major, semi-major, and minor axes that have the same orientation as the ellipsoid defined in the Search Region panel. When selecting the samples for the grade estimation, any samples within the high-yield exclusion ellipse can be chosen whether they are greater than or less than the threshold. However, between the smaller high-yield exclusion ellipse and the normal search region ellipse, only samples that are less than the threshold will be used.

Follow these steps:

  1. Enable the checkbox labelled Exclude distant, high yield samples to ignore sample values at or above the specified threshold and outside the specified ellipsoid.

  2. Specify the database Field for high yield samples. It is usually the same as the input grade field. However, other fields can be used to achieve special sample limits.

  3. Enter the threshold you want to use as a limit for high-yield samples. Only samples that are less than the threshold will be used.

  4. Select Use angles if you want to use a customised orientation for the search ellipsoid. If you do not use this option, the samples will be selected using the ellipsoid defined in the Search Region panel.

    Note:  When a high yield region is defined, the threshold applies to samples outside your specified distance and angle parameters, as described at the start of this section. This is commonly used to avoid estimating distant blocks from high gold nugget values.
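
    A minimal sketch of the selection rule described at the start of this section (plain Python with hypothetical helper flags; the capping options in the later steps are a separate control):

      # Distant high-yield exclusion: keep a sample if it is inside the
      # high yield exclusion ellipsoid, or if its grade is below the threshold.
      def keep_sample(grade, inside_high_yield_ellipsoid, inside_search_region, threshold):
          if not inside_search_region:
              return False                 # outside the normal search region
          if inside_high_yield_ellipsoid:
              return True                  # any grade allowed close to the block
          return grade < threshold         # distant samples must be below the threshold

      print(keep_sample(35.0, True,  True, 20.0))   # True  - high grade but nearby
      print(keep_sample(35.0, False, True, 20.0))   # False - high grade and distant
      print(keep_sample(12.0, False, True, 20.0))   # True  - below the threshold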

    Example:  Using the settings in the screenshot above, we have several high-value samples that we would still like to use in our estimation, but they lie outside of our high value spatial limits, so we set the Threshold value to 20. Then, we set our XYZ distance and rotation. Within that bubble, the high-value samples are used without capping. Outside of that bubble, their values are reduced to 20.

  5. Enter the search distance into the spaces provided for Major, Semi-Major, and Minor axes radii.

    Click the Interactive button to display an ellipsoid that can be edited by manipulating it with the mouse or by entering the rotation and axis parameters directly.

  6. Select Use Capping Options to set threshold values for the lower and upper grade values that will be used in your estimations.

    You can use a Fixed value or use a Block Model value.

  7. Select Do not cap grades inside High Yield region if you do not want a threshold applied to the region specified in Step 4 above.

  8. If you want to limit how much influence a single drillhole or its samples will have, there are two methods you can apply: Limit number of samples per drillhole and Limit number of drillholes per estimate.

    Note:   When you select Limit number of drillholes per estimate, both the minimum and maximum number of drillholes per estimate must be specified. If you would like to apply only one of these parameters, simply set the other parameter to an appropriate value. That is, if only the minimum parameter is required, set the maximum to be greater than the maximum samples per estimate, or something like 99999. Similarly, if only the maximum parameter is required, set the minimum to 0 or 1.

  9. Specify the name of the database field containing the drillhole name by selecting the field from the drop-down list labelled Field for drillhole ID. The default is DHID.

  10. Enable the option Use Key Sample Limits to restrict samples based on the criteria you enter into the table. For example, you could use 10 samples for your estimation but limit the number of samples that come from RC drilling to just two samples.

    1. Select the database field to evaluate.

    2. Select the operator.

    3. Enter a value.

    4. Enter the number of samples allowed if the condition is TRUE.

Soft Boundaries

Use this panel to set up soft boundaries. Sometimes samples from a different estimation domain share similar grade properties when they are located close to the limit between the domains. This is often known as a soft boundary between the domains. In this case, it can be useful to include samples from a different domain, but only at short distances from the blocks in the current domain.

Follow these steps:

  1. Select the checkbox labelled Use soft boundaries to make the rest of the panel available.

  2. Enter or select from the Soft boundary drop-down list the variable that will be used as the criterion to identify the estimation domains. The list is populated with the names of the database fields.

  3. If you want block model variables to be used as the radii of the restricted search ellipsoid, or as the bearing, plunge, and dip values of the restricted search region, click the option labelled Use block model variables. All the block model variables will then populate the drop-down lists of the respective fields.

    Explanation of table headings:

    Note:   If the sample selection criteria do not allow samples in soft boundary domains to be selected, then no samples will be applied to the definitions specified here.

  4. Click the Interactive button to display the Edit Interactive Ellipsoid panel and define search ellipsoids on screen in Vulcan. Once defined in Vulcan, the parameters will be written back into the panel.

    Note:  Clicking on a row to highlight it will enable the Interactive button.

Block Options

Use this panel to set up various block options.

Available options.

Note:  During the grade estimation process, any blocks that have been selected but do not receive an estimated value are assigned the grade estimation default value. All blocks that have not been selected retain their block creation default value. As a result, you could end up with two default values within the block model. If block selection criteria have not been specified (that is, all blocks are selected), then the blocks not receiving an estimated value have their block creation default value overwritten with the grade estimation default value, so only one default value exists.

Localisation Strategy

Use this panel to set the parameters for the estimation run. You will need to select the localisation strategy, seed, and number of iterations of the estimation.

  1. Select the localisation strategy from the drop-down list labelled Strategy. There are four types to choose from:

  2. Enter a Seed. If you selected Minimize Entropy or Randomly as your strategy, you can enter your own number or click the Generate button to produce a random seed.

    Note:  A seed is not used with the Rank by SMU Estimate or Secondary Data strategies.

  3. Enter the number of simulations in the space labelled Iterations.

    Note:  This is only used if you selected Minimize Entropy as your strategy.

  4. If you selected Secondary Data as your strategy, you will need to select a variable from your block model from the drop-down list labelled Variable.

Uniform Conditioning

Gammabar refers to the calculation of the variance within blocks.

  1. Enter the number of SMU discretisation steps for each direction. If the SMU represents a bench, then enter a 1 for the Z value.

    Note

    The variance within blocks is obtained by discretising the block with a number of sample points and calculating the average variogram value for all possible pairs of points; evaluating the underlying integral directly is extraordinarily difficult, if not impossible. The X, Y, Z discretisation parameters control the number of pairs to consider. The number of discretisation points can affect the final value slightly, but in practice 10 x 10 x 10 is generally accepted. (Too many discretisation points could lead to numerical precision issues; too few will give a poor estimate.)

    For uniform conditioning, the variance within both the SMU-sized blocks and the panels is required, so you must specify the discretisation for both. In general, the 10 x 10 x 10 setting does not need to be changed. One case where you might change it is 2D uniform conditioning (such as a single bench).
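
    As an illustration of the idea only (a sketch with a hypothetical single-structure spherical variogram; Vulcan's implementation will differ):

      import itertools, math

      def spherical_gamma(h, nugget=0.1, sill=1.0, rng=100.0):
          # Hypothetical spherical variogram model.
          if h <= 0.0:
              return 0.0
          if h >= rng:
              return nugget + sill
          r = h / rng
          return nugget + sill * (1.5 * r - 0.5 * r**3)

      def gammabar(block_size=(10.0, 10.0, 10.0), steps=(10, 10, 10)):
          # Discretisation points at cell centres inside the block, then the
          # variogram averaged over all distinct pairs of points.
          axes = [[(i + 0.5) * size / n for i in range(n)]
                  for size, n in zip(block_size, steps)]
          points = list(itertools.product(*axes))
          pairs = list(itertools.combinations(points, 2))
          return sum(spherical_gamma(math.dist(p, q)) for p, q in pairs) / len(pairs)

      print(gammabar(steps=(4, 4, 4)))   # coarser grid used here to keep it quick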

  2. Enter the number of Panel discretisation steps for each direction.

    Note:   The panels are defined in the Panel Definition pane by one of three methods: manually defining the X, Y, Z inputs; the triangulated panel method; or the zone method. If you selected the triangulated or zone method, discretisation will not be available.

  3. Select the Variance Correction Factors you wish to use. The variance correction factor (f) is the ratio of the block variance to the original sample variance.

    There are three options for sample variance:

    • Variogram sill - The default is to use the variogram sill; this is likely the correct choice. Recall that the dispersion variance (gammabar) is calculated from the variogram in the first place.

    • Sample variance - Use the (weighted) sample variance of the original samples.

    • Calculated variance - Use the variance calculated from fitting the sample anamorphosis (this is calculated from the Hermite polynomials).
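
    In standard geostatistical terms, this factor follows Krige's relation (a reference formula, written in LaTeX notation, not necessarily Vulcan's exact expression):

      f = \frac{\sigma^{2}_{block}}{\sigma^{2}_{sample}}
        = \frac{\sigma^{2}_{sample} - \bar{\gamma}(v,v)}{\sigma^{2}_{sample}}

    where \bar{\gamma}(v,v) is the within-block variance (gammabar) computed above and \sigma^{2}_{sample} is whichever of the three sample variance options is selected.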

  1. Set the parameters for the Normal Score Transformation.

    1. Enter the minimum and maximum values in the spaces provided. These are the minimum and maximum values that a node can take when back transforming Gaussian values to grade units.

    2. Select the type of model to use for the left and right tails.

      The left tails attribute controls the shape between the lowest value in the transformation table and the minimum of the distribution. Linear and Power methods are displayed in the drop-down list.

      The right tails attribute controls the shape between the highest value in the transformation table and the maximum of the distribution. Linear, Power and Hyperbolic methods are displayed in the drop-down list.

    3. Enter the Lower w value and Upper w value in the space provided. A w value less than 1 leads to positive skewness, while a w value greater than 1 leads to negative skewness and a value equal to 1 is equivalent to a linear interpolation.

    4. You also have the option of using a declustering weight. If you select this option, define the field used for the weights.

  1. Set the Number of Hermite polynomials. By default this is set at 100, and should not be changed unless absolutely necessary.

Extra Variables

This panel allows you to apply the same weights to several sample variables.

Example:  If you have several different metals that follow the same variography, then you may want to use the same weights for each of your metal variables.

Note:   Your original sample variable and block model grade variable will always appear in this list.

Note:  This panel is only available if you are using inverse distance, simple kriging, or ordinary kriging. It is not available if you are using indicator kriging or uniform conditioning.

Follow these steps:

  1. Enable the checkbox labelled Estimate multiple variables.

  2. Select the variable to which you want to apply the weighting from the Input Variable drop-down list.

  3. By default, the values from Min Value, Min Samples, and Zero Missing are automatically copied over from the original variable. However, you can edit these by replacing the values in the cells.

  4. Select the variable where you want to store the results by picking from the Output Variable drop-down list.

Explanation of table headings