
HRTF Project Export

This is the first point where the actual Mesh2HRTF project software enters the workflow. :)

------------------------------------------------------------ Part of the Complete Beginner’s tutorial ------------------------------------------------------------
Previous <<< 3D mesh optimization (last updated 2022-08-27) Next >>> The HRTF Simulation

A video version of this tutorial is available.


Overview

The main steps to export the project:

  1. After completing the 3D mesh optimization tutorial, the starting point in Blender is two correctly positioned final meshes: one for the Left ear and one for the Right ear.

  2. Depending on the choice of “Source Type” there are 2 alternative workflows:

    1. Default recommendation is Vibrating Element source.

    2. Alternative approach is Point source.

  3. Final export from Blender - as a result two simulation projects must be created (one for Left and one for Right ear).

    1. Fast method - for the least amount of effort and excellent results.
    2. Regular method - for all custom use-cases.

Note: “Source == in-ear microphone” – in Mesh2HRTF the audio “Source” usually represents the in-ear microphone locations. The microphone is called a “Source” because the HRTF simulation actually works backwards: it assumes that the sound originates from the ear and then measures how this sound would arrive at the defined points of the Simulation grid (at the locations of virtual speakers around the head).


alt 1 - default Vibrating Element source

Follow these steps if you will use the recommended Left ear and Right ear as “Source type”. The vibrating element method is verified in detail by Ziegelwanger et al. (2015).

  1. It is necessary to use the 3 special materials - "Skin", "Left ear" and "Right ear" (a scripted sketch of these material steps is given at the end of this section):

    1. Select one of the imported meshes and hide all other meshes that get in the way

    2. Go to the “Material Properties” tab (with the right object selected) - see illustration "1".

    3. Add 3 material slots and assign the materials to them (do this even for a mesh where one of the ear materials will not be assigned to any polygon!):

      • Choose the “Skin” material for the 1st material slot (the whole mesh should now change color to show the “Skin” material)

      • Choose the "Left ear" and "Right ear" materials for the 2nd and 3rd material slots.

    4. Go into “Edit mode” (Tab) and “Face select” (3) to select one triangle that best represents the blocked ear canal microphone position.

    5. Assign to this triangle the material slot with the "Left ear" or "Right ear" material (matching the ear side of the mesh you are editing)

    6. Press (Tab) to exit “Edit mode”

    7. Repeat steps 1-6 to set up the materials for the other ear side.

    8. Save the Blender file (can be useful).

  2. Export the mesh (choose either the Fast or the Regular way):

    • The Fast way (using custom made Blender script suitable for most HRTF simulations):

      1. Go to Fast method of the Final export from Blender
    • The Regular way (universal approach):

      1. Rename (or delete) the “Reference” object to something else (this object name specifies which mesh is used by the Mesh2HRTF Blender exporter)

      2. Select the mesh that you will export (note, exporting will be done two times)

      3. Duplicate the mesh: while in Object mode (Tab to switch modes), with the mouse over the “3D view” press (Shift-D) to start duplicating the object and then (Right-click) to confirm the duplication without changing the object position. Now you will see 2 overlapping objects.

      4. Rename the duplicate object to “Reference”.

      5. Now perform Regular Final export from Blender for this ear and this Source type. (do not mix up left and right ears!)

      6. Repeat steps 1-5 to export project for the other ear side.

Note – the “Point source” light object can be ignored – it will not be used.
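
For reference, these manual material steps can also be scripted. Below is a minimal sketch (not part of the official Mesh2HRTF scripts); it assumes the ear mesh is the active object and that the blocked ear canal microphone face was already selected in Edit mode:

import bpy  # a sketch of the manual material steps above - run in Blender's Python Console

obj = bpy.context.active_object          # the ear mesh to prepare (e.g. '3Dmesh_graded_left')
bpy.ops.object.mode_set(mode='OBJECT')   # face selection data is only up to date in Object mode

# create (or reuse) the three required materials and add them as material slots 1-3
for name in ("Skin", "Left ear", "Right ear"):
    mat = bpy.data.materials.get(name) or bpy.data.materials.new(name)
    if obj.data.materials.find(name) == -1:
        obj.data.materials.append(mat)

# assign "Skin" to every face first ...
for poly in obj.data.polygons:
    poly.material_index = obj.data.materials.find("Skin")

# ... then put the selected microphone face into the "Left ear" slot
# (use "Right ear" instead when preparing the right-ear mesh)
for poly in obj.data.polygons:
    if poly.select:
        poly.material_index = obj.data.materials.find("Left ear")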


alt 2 - alternative Point source

Follow these steps if you will use the alternative Point Source “Source type” method. In theory this method should give the same results as the vibrating element source, but it is a bit less tested.

  1. There is no requirement to assign any materials for this method – the mesh can stay without any Material slots.

  2. Ear position is determined by the position of the “Point source” light object (since meshes are exported for each ear, it is best to create 2 such objects - one per ear - and assign the required “Point source” name to the correct object just before export; a scripted sketch of the placement steps follows this list):

    1. Select one of the imported meshes to export next and hide all other meshes that get in the way

    2. Duplicate the mesh: While in Object mode (Tab to switch modes) with the mouse over the “3D view” press (Shift-D) to start duplicating object and then (Right-click) mouse to confirm duplication without changing the object position. Now you will see 2 overlapping objects.

    3. Rename the duplicate object to “Reference”.

    4. Go into “Edit mode” (Tab) and “Vertex select” (1) to select one vertex that best represents the blocked ear canal microphone position.

    5. Press (Shift+S) and (3) to snap the 3D cursor to the selected vertex.

    6. Select the “Point source” light object that needs to be placed into the ear (in Object mode – use Tab to switch).

    7. Press (Shift+S) and (8) to move the selected object to 3D cursor location.

    8. Then directly edit the Location Y-coordinate and add ~0.3 mm of extra offset from the mesh, to make sure the “Point source” stays outside the ear during the simulation ([source](Most common errors)).

    9. Use "Save As" to save the Blender file before trying to export (can be useful).

    10. Now perform Regular Final export from Blender for this ear and this Source type.

    11. Repeat steps 1-10 for the other ear with another Point source object.
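
The placement steps above can also be done from the Python Console. A minimal sketch (it assumes the ear mesh is the active object with the microphone vertex selected in Edit mode, and that the tutorial's object names and millimetre-scale mesh are used):

import bpy  # sketch: snap the "Point source" object to the selected microphone vertex

mesh_obj = bpy.context.active_object       # the ear mesh being prepared
bpy.ops.object.mode_set(mode='OBJECT')     # vertex selection data is only up to date in Object mode

mic_vertex = next(v for v in mesh_obj.data.vertices if v.select)
world_position = mesh_obj.matrix_world @ mic_vertex.co   # vertex position in world coordinates

source = bpy.data.objects["Point source"]  # the light object acting as the source
source.location = world_position
# add roughly 0.3 mm of offset along Y so the source stays outside the mesh
# (0.3 Blender units = 0.3 mm here, because this tutorial's meshes are modeled in mm;
#  flip the sign if this pushes the point into the head for the ear you are preparing)
source.location.y += 0.3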


Final export from Blender

A - Fast method

This method may look unusual, but it actually eliminates a lot of manual work and protects the user from many potential mistakes - highly recommended!

  1. Find the Python Console in Blender (it is available on the "Scripting" tab or can be opened where needed.)

  2. Modify the paths (in any text editor) and then copy-paste and run the following code in Blender's Python Console. (This code already uses the recommended settings for the final-quality simulation. You are also welcome to check and edit the Project Export settings as desired, especially for quick test runs.):


import os     # __This script automatically exports both LEFT and RIGHT ear projects using vibrating element source.__
import bpy    # to be executed in Blender Only. Uses "exportMesh2HRTF.py".   (First version made by S.D. in 2022-03)

# Modify the paths and names!
mesh_Names = ['3Dmesh_graded_left', '3Dmesh_graded_right']  # the Left and Right Blender objects to export
exportNewFolder_Names = ['myHRTF_project_L', 'myHRTF_project_R']  # your desired Project Folder names
export_Path = r'C:\mesh2hrtf-tools'  # warning, do not include a "\" character at the end of the path!
mesh2hrtf_Path = r'C:\Mesh2HRTF\mesh2hrtf'  # warning, do not include a "\" character at the end of the path!
ears_list = ['Left ear', 'Right ear']  # all Vibrating elements (nothing to change)
###  (note some project export settings are adjusted at the end of this script).

# --------------------------------------- DATA CHECKS --------------------------------------------------
# making sure that project folders do not already exist
if os.path.isdir(os.path.join(export_Path, exportNewFolder_Names[0])) or os.path.isdir(os.path.join(export_Path, exportNewFolder_Names[1])):
    ears_list = []  # the only way to stop Blender from executing everything
    raise ValueError("Project folder " + os.path.join(export_Path, exportNewFolder_Names[0]) + " already exists. - Choose another folder or delete files.")

# making sure that the objects-to-export were not duplicated previously and left un-renamed.
for obj in bpy.context.scene.objects[:]:
    if obj.type == 'MESH' and (obj.name == (mesh_Names[0]+'.001') or obj.name == (mesh_Names[1]+'.001')):
        ears_list = []
        raise ValueError('You have to rename the objects "' + mesh_Names[0]+'.001" and "' + mesh_Names[1] + '.001" to some other names to continue.')



# ------------------------------------ MAIN EXPORTING CODE -----------------------------------------------
for e_nr in range(len(ears_list)):  # loop for both ears
    try:  # Switch to object mode to avoid export errors
        bpy.ops.object.mode_set(mode='OBJECT', toggle=False)
    except:
        pass  # the command crashes if the 3D view was never selected in Blender - not a problem
    #
    # check if 'Reference' object exists and RENAME it to preserve it as backup.
    bpy.ops.object.select_all(action='DESELECT')  # de-select all objects
    for obj in bpy.context.scene.objects[:]:
        if obj.type == 'MESH' and obj.name == 'Reference':
            bpy.data.objects['Reference'].select_set(True)  # select 'Reference'
            obj.name = 'bckp_Reference_' + ears_list[e_nr]  # rename the object for backup
            break
    #
    # select, activate, duplicate & rename the object to export
    bpy.ops.object.select_all(action='DESELECT')  # de-select all objects
    bpy.context.view_layer.objects.active = bpy.data.objects[mesh_Names[e_nr]]   # activate
    bpy.context.object.hide_set(False)  # un-hide if object was hidden (necessary for duplicating)
    bpy.data.objects[mesh_Names[e_nr]].select_set(True)  # select
    bpy.ops.object.duplicate(linked=False)  # duplicate
    bpy.context.selected_objects[0].name = 'Reference'  # rename the duplicate to 'Reference' so the exporter uses it
    #
    # save Mesh2HRTF project ----------------------------------
    bpy.ops.mesh2input.inp(
        filepath=os.path.join(export_Path, exportNewFolder_Names[e_nr]),
        programPath=mesh2hrtf_Path,
        sourceType=ears_list[e_nr],
        minFrequency=0,
        maxFrequency=24000,
        frequencyVectorType='Step size',
        frequencyVectorValue=150,         # <== consider adjusting ("The frequency settings" tutorial)
        evaluationGrids='Default; ARI',   # <== consider adjusting ("Evaluation Grids" tutorial)
        materialSearchPaths='None',
        pictures=False,
        reference=True,
        computeHRIRs=True,
        method='ML-FMM BEM',
        unit='mm',
        speedOfSound='343',
        densityOfMedium='1.1839')
# now Hit Enter, two times!   :)

Done (you should now have 2 project folders created in your export folder).
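
As an optional extra check before closing Blender, the following lines (reusing the export_Path and exportNewFolder_Names variables from the script above) print whether both project folders exist and contain files - a small sketch, not part of the exporter itself:

import os  # optional sanity check, run in the same Python Console after the export script

for folder_name in exportNewFolder_Names:
    project_path = os.path.join(export_Path, folder_name)
    created = os.path.isdir(project_path) and len(os.listdir(project_path)) > 0
    print(project_path, '-> OK' if created else '-> MISSING or EMPTY, scroll up and check for errors')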

**Troubleshooting:**

  • The provided example code is optimized for this tutorial: it is made specifically to simulate HRTFs with the vibrating-element method for each ear separately, after using "hrtf_mesh_grading". If you try to apply it to other cases, more extensive code editing may be necessary.

  • Scroll up in the Blender Python Console to see if there are any errors. The errors may not be visible if you run a long script in one go.

B - Regular method

This is the universal approach using the Exporter menu. Great for trying out custom settings and unusual use-cases of Mesh2HRTF.

  1. Open the Mesh2HRTF Export Menu (File -> Export -> Mesh2HRTF)

  2. Manually fill in all Project Export settings for the correct ear and press the "Export" button.

(Example image only – it should NOT be used as a reference for export settings.)

**Troubleshooting:**

  • In case File => Export => Mesh2HRTF is greyed out and cannot be selected – make sure you have the “Reference” object selected in Blender.

  • If you get any errors – try to read the end of the error message before clicking on anything (or the message may disappear). Most export errors are caused by simple typos and other easy-to-fix mistakes. In more complex cases you can check the [Most common errors](Most common errors) and the [(older) Most common errors](Most common errors_0.4.0) pages before attempting debugging or reporting an issue.

    • If there was an error, the newly created half-empty project folder must be manually deleted before the same project name can be used again.
  • Both left and right projects must have the same export settings.

  • Project folder names do not matter – use what you want (but preferably without spaces).


Project Export settings

The settings recommended here are just an example – you are welcome to make your own adjustments. Pay particular attention to the settings whose values differ from the Blender exporter defaults. (The same settings are already included in the script for the Fast method.)

  • Note - this is a general recommendation for the FINAL simulation. For initial tests it is good to increase the Frequency step size to, for example, 600 or even 1200. (To make a rough simulation even faster, it is also possible to reduce the 3D mesh resolution using coarser mesh grading settings.)
| Setting | Value | Comment |
| --- | --- | --- |
| Title | leave default | not used anywhere |
| BEM Method | default is OK | "ML-FMM BEM" default is optimal |
| Ear | Left ear & Right ear | export 2x projects, one for each Vibrating Element |
| Mesh2HRTF-path | provide correct path | see tooltip |
| Pictures | not important | just renders input mesh pictures |
| Reference | True | important |
| Compute HRIRs | True | important |
| Unit | leave default | "mm" |
| c | default is OK | speed of sound "343.18" |
| rho | default is OK | air density "1.1839" |
| Evaluation Grids | Default; ARI | "ARI" is a good start, but try custom and several grids at once |
| Materials Path(s) | leave default | for advanced cases only |
| Min. frequency | 0 | zero is always a safe choice |
| Max. frequency | 24000 | necessary to produce a 48 kHz .sofa file |
| Frequencies | Step size | both options are used |
| Value | 150 | one possible recommendation - see below for more info |

For all details about each setting please read: [Mesh2HRTF Export Parameters](Mesh2HRTF Export Parameters)

The frequency settings

This configuration simulates both 48 kHz and 44.1 kHz SOFA files in one go. It is common that HRTF-related software does not perform internal sample-rate conversion and demands that all inputs are provided at a matching sample rate. It is therefore convenient to have SOFA HRIR files available in all common sampling rates (in practice 48 kHz and 44.1 kHz is sufficient for 99.5% of cases). To achieve this, the recommended settings use a frequency step that allows exporting SOFA files in all common sampling rates from the results of a single simulation.

Recommended example frequency settings (note, there is never any reason to change the Min. frequency = 0):

| Use Case | Frequency Step | Resulting Number of Steps | Total nr. of simulations up to 24 kHz |
| --- | --- | --- | --- |
| Main recommendation: good quality, provides both 48 kHz and 44.1 kHz HRTF files | 150 Hz | 160 | 160+160 |
| Double-the-resolution mode (also for both 48 kHz and 44.1 kHz) | 75 Hz | 320 | 320+320 |
| Commonly used by researchers (ARI database), good HRTF quality (but does not work for multi-sampling-rate export) | 187.5 Hz | 128 | 128+128 |
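
As a quick cross-check of the numbers in this table: the number of simulated frequencies follows directly from the step size and the "Max. frequency", and (assuming the HRIR length is simply the implied sampling rate divided by the frequency step) the approximate impulse response length can be estimated too:

step_hz = 150                                        # frequency step from the main recommendation
max_frequency_hz = 24000                             # "Max. frequency" needed for a 48 kHz SOFA file
number_of_steps = max_frequency_hz // step_hz        # 160 simulated frequencies per ear (as in the table)
sampling_rate_hz = 2 * max_frequency_hz              # 48000 Hz (Nyquist relation)
hrir_length_samples = sampling_rate_hz // step_hz    # ~320 samples per impulse response (approximation)
print(number_of_steps, sampling_rate_hz, hrir_length_samples)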

For deeper understanding:

  • Step size = 150 Hz does NOT mean that the simulation has no information below 150 Hz! The way the simulation and sampling work is quite counter-intuitive: even if a certain frequency is not explicitly simulated, it can still be accurately reflected in the results anywhere from 0 Hz up to the "Max. frequency".

  • Max. frequency indirectly defines the sample rate of the resulting HRTF. If you want to natively convolve the HRTF with audio recordings at the common sampling rate of 44.1 kHz, the simulation must include the value of 22050 Hz; for 48 kHz the simulation must cover 24000 Hz.

    • For quick tests it is better to leave Max. frequency at the intended values (for example 24000 Hz) but instead increase the Step size to perhaps 600 or even 1200 to just run a test simulation with a very low "Number of Steps" (just to see that everything works).
  • For Max. frequency simulations over 24000 Hz consider that:

    • As frequency increases, the demands on simulation mesh smoothness increase, and the typically recommended re-meshing settings may be too coarse to avoid the "Non-Convergence issue" - cases when the simulation fails to find a robust solution for a given frequency. Therefore at frequencies >24 kHz there is a significant risk that the simulation will fail to compute and you will not be able to use the extra-high-frequency data.

    • There is plenty of reliable research showing that even 44.1 kHz is a perfectly sufficient sampling rate for normal human hearing. Therefore, apart from High-Res audio marketing reasons, there is no advantage in listening to audio at higher than a 48 kHz sampling rate (so the difficulty of simulating higher-sampling-rate HRTFs is not a significant issue).

    • Real-time convolution with HRTFs at unnecessarily high sampling rates (96 kHz or higher) costs a lot of extra processing resources, again for no practical benefit.

  • To produce valid SOFA HRTF files, the frequency Step size must be constant. Therefore it is not possible to introduce "optimizations" to the frequency step to reduce the sampling density at higher frequencies.

  • For multi-sampling-rate SOFA files the finalize_HRTF_simulation.py script recognizes only 150 Hz or 75 Hz as valid "Step sizes" and will try to export HRIR files in the following common sampling rates: 192000, 96000, 88200, 48000, 44100 (see the small check below).
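
One way to see this "valid step size" constraint (assuming, for illustration only, that a sampling rate can be exported when the frequency step divides its Nyquist frequency exactly - this is not a statement about the script's internals):

# check which of the common sampling rates are compatible with a given frequency step
for step_hz in (150.0, 75.0, 187.5):
    usable = [fs for fs in (192000, 96000, 88200, 48000, 44100) if (fs / 2) % step_hz == 0]
    print('step', step_hz, 'Hz -> usable sampling rates:', usable)
# 150 Hz and 75 Hz cover all listed rates; 187.5 Hz misses 88200 and 44100 (the 44.1 kHz family)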

For advanced use it can be important to understand and customize the sampling grids. When the simulation is complete, the NumCalc solver outputs data for every point in space that is defined in the Evaluation Grid. These points are the virtual sound sources of the HRTF, and the method allows a practically unlimited number of virtual sound sources in an HRTF. In contrast, when an HRTF is measured using in-ear microphones, an actual loudspeaker must be placed at each sound source position and a measurement taken, which directly impacts the overall HRTF measurement duration (and there is a practical limit to how long a human can sit in an HRTF measurement session).

  • A new evaluation grid for Mesh2HRTF can be created in Blender, Python or Matlab (link to the code for all 3 approaches).

  • It is possible to simulate multiple Evaluation grids in one simulation with a minimal performance penalty. If an Evaluation grid is located in ../mesh2hrtf-git/mesh2hrtf/Mesh2Input/EvaluationsGrids/ then it is enough to specify just the name of the grid; otherwise an absolute path to the custom grid is needed. Multiple Evaluation grid names can be separated by semicolons (;) when using the Regular export interface (see the example after this list).

  • By customizing sampling grids it is possible to:

    • Adjust the distance of the sound sources (normally HRTF depends on the angle only - the distance is fixed).

    • Adjust density of sound sources around the most common speaker locations (to maximize accuracy of 7.1 surround sound when using headtracking).

    • Simulate just a few specific speaker positions - can be useful to minimize SOFA file size if the simulation will only be used with 2.0 or 7.1 surround speaker virtualization without headtracking.

  • Some additional information is available on the Evaluation grids Wiki page.
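
For example, when scripting the export (as in the Fast method above), the grids are passed as one semicolon-separated string; the custom grid path below is only a made-up illustration:

# composing the "evaluationGrids" value: built-in grid names and absolute paths can be mixed
grids = ['Default', 'ARI', r'C:\my_grids\my_custom_grid']   # the last entry is a hypothetical custom grid
evaluation_grids_value = '; '.join(grids)
print(evaluation_grids_value)  # use this string for evaluationGrids=... or in the "Evaluation Grids" field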

More information and recommendations on specific Evaluation Grids may be added at a later point.

More Tips

  • When using the Regular export method, notice that there are useful tooltips for every field in the Mesh2HRTF exporter (shown when the mouse hovers over an entry field).

  • Some HRTF simulation services (and most consumer HRTF personalization services) only simulate one ear and mirror the result to the other side. Mesh2HRTF does not have ready-made scripts for this because it is not a harmless simplification: human faces are not symmetrical, and using the simulation of one ear for both sides may become noticeable at higher frequencies (>5 kHz).

  • Ear = Both ears - it is possible to export a single project that simulates both ears using the “Vibrating element” source. The main reason this approach is usually not used is that it requires an equally detailed 3D mesh on both sides and is therefore very inefficient in terms of memory usage and overall simulation time. Note: “finalize_HRTF_simulation.py” currently offers several very useful features for SOFA file export, but it does not work on simulation results from "Ear = Both ears" projects.

Picture rendering

If you choose the “Pictures = True” export option, the Blender exporter will automatically render a few images of the mesh you are exporting and save them in the folder /Pictures/ inside the project folder. For picture rendering to work as intended, check that:

  1. There is a “Camera” object and a “Light” point-light object in the scene, and the naming is set correctly in 2 places (check the screenshot).

    • Positions and properties of the Camera and Light are not relevant.

  2. You have deleted, or marked as invisible during rendering, all objects that you do not want to see in the rendering! Normal Blender object visibility controls apply.
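
Before exporting with Pictures enabled, a quick scripted check that the expected objects exist can save a failed render. A small sketch (it only verifies that the names are present in the scene):

import bpy  # sketch: verify the objects that picture rendering expects

scene_object_names = {obj.name for obj in bpy.context.scene.objects}
for required_name in ("Camera", "Light"):
    if required_name not in scene_object_names:
        print('Missing "' + required_name + '" object - the exported pictures may not render as intended.')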


Next tutorial step >>> The HRTF Simulation