By-Example Synthesis of Vector Textures

The official code repository for the paper "By-Example Synthesis of Vector Textures" by Christopher Palazzolo, Oliver van Kaick, and David Mould.

[Project Page] [Vector Interpolation Video] [Results Repository]

Installation

First, clone the repository to your local machine.

git clone https://github.com/ChrisPGraphics/ByExampleSynthesisOfVectorTextures
cd ByExampleSynthesisOfVectorTextures

From here, you can either install the repository Manually or with a Docker Container. We also provide several converted exemplars and synthesized results in a few different formats at our Results Repository. This allows for easy prototyping without needing to first install our code.

Manual Installation Instructions

Assuming you have Python installed (we use Python 3.11.5), follow these instructions to prepare your local machine to use our code.

pip install -r requirements.txt

If you are using a Linux-based OS, you may also need to run the following commands to install system dependencies for OpenCV (cv2).

apt-get update
apt-get install ffmpeg libsm6 libxext6 -y

Finally, install the Segment Anything Model.

You will need to create a directory called sam_checkpoints in the repository and download the default or vit_h model to that location. The model can be downloaded here.

Now that installation is complete, you can proceed to the Vector Texture Synthesis section.

Docker Install Instructions

Simply run the following commands. SAM and PyTorch will be downloaded and installed for the CPU.

docker build -t vector_textures .
docker run -it vector_textures /bin/bash

Now that your Docker container is built and running, you can proceed to the Vector Texture Synthesis section.

Vector Texture Synthesis

Our process contains two steps. First is Analysis which converts a natural image into our vector hierarchy format. Once that is complete, Synthesis can be run to create a novel image.

We also provide the code for several other operations which were showcased in the paper. Their instructions are in the following subsections.

  • Weight Optimizer - Fine-tunes the weights used during synthesis to improve results of a specific exemplar
  • Interpolation - Generates frames that interpolate between two results generated by our algorithm using our vector interpolation. The interpolation video can be found on YouTube.
  • Adaptive Density - Increases the spacing between textons and then fills in the space with new textons to once again match the exemplar density
  • Result Editor - Performs one or many vector edits on a texture result, many of which are seen in our paper.
  • Joint Synthesis - Allows two or more exemplars to be used when synthesizing a result based on a provided probability map.
  • Exporter - Exports a converted exemplar or synthetic result to a format that could be used by other applications. Formats include JSON, XML and SVG. The schema directory contains the XSD and JSON schema files that describe the output format.

Finally, we provide a few exemplars in the exemplars directory which were used to generate results in our paper.

Analysis

To analyze an image, run the following command.

python analyze_texture.py [path/to/exemplar.png]

Or use the following to display the help text.

python analyze_texture.py -h

In cases when the exemplar is rendered or has a clear number of texton types (like the circles image in Figure 10c), the --texton_clusters flag can be used to improve results. In the circles image, we set --texton_clusters=2 since there are two types of textons (large yellow and small red circles).

Synthesis

To synthesize an analyzed image, use the following command.

python synthesize_texture.py [exemplar]

NOTE: For this script, you provide neither the path to the image nor the directory in the intermediate folder. Simply provide the base name of the image without the file extension. For example, if the image path is /foo/bar.png and you analyzed the image with python analyze_texture.py /foo/bar.png, you would call this script with python synthesize_texture.py bar

Or use the following to display the help text.

python synthesize_texture.py -h

In cases when there is an expected structure in the exemplar (like the circles image in Figure 10c), the --improvement_steps flag can be increased. In the circles image, we set --improvement_steps=20 instead of the default 5.
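The base-name convention described in the note above amounts to stripping the directory and file extension from the exemplar path; a minimal sketch using only the standard library:

```python
import os

def exemplar_name(image_path: str) -> str:
    """Strip the directory and file extension to get the base name
    that synthesize_texture.py expects as its argument."""
    return os.path.splitext(os.path.basename(image_path))[0]

print(exemplar_name("/foo/bar.png"))  # bar
```

So for the /foo/bar.png example, the derived name is simply bar.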

Weight Optimizer

An optimizer can be used to improve results by fine-tuning the weights used during synthesis. Textures that are fully or almost fully covered by textons benefit the most from this process.

After optimization is complete, these weights are used by default whenever the exemplar is used to synthesize a result. To ignore the fine-tuned weights and use the default set, add the --default_weights flag to the synthesis script. This script will only overwrite the previously found weights if its final score is better than the previous best, so it can be run multiple times to help escape local minima. It is recommended to use the --skip_density_correction flag on results synthesized with optimized weights.

python optimize_weights.py [exemplar]

Or use the following to display the help text.

python optimize_weights.py -h
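The keep-best behavior described above (only overwriting saved weights when the new score improves, so repeated runs can escape local minima) can be illustrated with a small sketch. The objective, weight format, and file layout here are purely hypothetical stand-ins, not the repository's actual optimizer API:

```python
import json
import os
import random

def optimize_once(seed: int) -> tuple[list[float], float]:
    """Stand-in for one optimizer run: returns candidate weights and a
    score (lower is better). The toy objective is illustrative only."""
    random.seed(seed)
    weights = [random.random() for _ in range(3)]
    score = sum((w - 0.5) ** 2 for w in weights)
    return weights, score

def run_trials(path: str, trials: int = 5) -> float:
    """Run several optimization attempts, overwriting the saved weights
    only when a trial beats the best score found so far."""
    best = json.load(open(path))["score"] if os.path.exists(path) else float("inf")
    for seed in range(trials):
        weights, score = optimize_once(seed)
        if score < best:  # overwrite only on improvement
            best = score
            with open(path, "w") as f:
                json.dump({"weights": weights, "score": score}, f)
    return best
```

Because the saved score is loaded before each batch of trials, calling run_trials again can only keep or improve the stored result, which mirrors how repeated optimizer runs accumulate the best weights.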

Interpolation

This script performs vector interpolation between a source and destination texture that were synthesized using our code. initial_texture and final_texture are provided in the same way the exemplar is given for synthesis. frames is the number of frames to interpolate (not including the first and last frame), and save_path is the path to the directory where each frame will be saved. The filename of each frame will be [frame_number].png, where frame_number is padded with zeros to three digits.

python interpolation.py [initial_texture] [final_texture] [frames] [save_path]

The following displays the help text.

python interpolation.py -h

Adaptive Density

Increases the spacing between textons of a result and then fills in the gaps with new textons to match the exemplar density. The result is saved to output/adaptive_density. This is performed with the following command:

python adaptive_density.py [exemplar]

Or use the following to display the help text.

python adaptive_density.py -h

Result Editor

The result editor allows edits to be performed on our vector results that would be more difficult to apply to a pure raster image.

To control the behavior of the script, the Python file edit_result.py needs to be edited directly.

The lines to note are the following (near the end of the file):

import edit_operations

exemplar = "rust_1"

edits = [
    edit_operations.SmallTextonRemoval(20)
]

exemplar is the name of the result to edit, and edits is a list of edit operations. We provide several, some of which are used in the main paper, however, creating new ones can be done by extending the BaseEditOperation class in the edit_operations package. In the provided example, edit_operations.SmallTextonRemoval(20) will remove all textons smaller than 20px in area.

The result of this code will be saved in the output directory and will be called result_[operations].png, where [operations] is an underscore-separated list of the operations applied.
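Conceptually, an operation like SmallTextonRemoval filters the texton list by area. The sketch below is a hypothetical stand-in: the Texton dataclass and the apply method are placeholders, not the repository's actual BaseEditOperation interface:

```python
from dataclasses import dataclass

@dataclass
class Texton:
    area: float  # area in pixels (placeholder for the real texton type)

class SmallTextonRemoval:
    """Conceptual stand-in for the edit operation: drop every texton
    whose area falls below a threshold."""
    def __init__(self, min_area: float):
        self.min_area = min_area

    def apply(self, textons: list[Texton]) -> list[Texton]:
        return [t for t in textons if t.area >= self.min_area]

# Chained application, mirroring how the edits list is consumed
edits = [SmallTextonRemoval(20)]
textons = [Texton(5), Texton(20), Texton(300)]
for edit in edits:
    textons = edit.apply(textons)
# textons now holds only the textons with area >= 20
```

Each operation in the edits list transforms the result in turn, which is why several edits can be stacked in a single run.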

Joint Synthesis

Joint synthesis allows two or more exemplars to be synthesized using a probability map.

To control the behavior of the script, the Python file joint_synthesis.py needs to be edited directly.

The lines to note are the following (near the end of the file):

import synthesis

exemplars = ["concrete_4", "flowers_1", "tile_1"]
source_map = synthesis.source_map.MultiSourceMap([
    "source_maps/pg_background.png",
    "source_maps/pg_p.png",
    "source_maps/pg_g.png",
])
result_size = (1000, 515)

The source_map is an object that extends the SourceMap class. It takes images that define the probability distribution of each exemplar being sampled when any element is required at a location. The result_size should be no larger than any of the given source maps.

We provide the three source maps in the source_maps directory which were used to create the PacificGraphics logo seen in the paper.
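Conceptually, choosing which exemplar supplies an element at a given location amounts to normalizing the per-exemplar map values at that pixel and drawing from the resulting categorical distribution. A minimal NumPy sketch of that idea (not the repository's actual SourceMap implementation):

```python
import numpy as np

def sample_exemplar(maps: list[np.ndarray], x: int, y: int,
                    rng: np.random.Generator) -> int:
    """Pick an exemplar index at pixel (x, y) with probability
    proportional to each map's value there. Illustrative only."""
    weights = np.array([m[y, x] for m in maps], dtype=float)
    probs = weights / weights.sum()
    return int(rng.choice(len(maps), p=probs))
```

For example, at a pixel where only the second map is nonzero, the second exemplar is always selected; where two maps have equal values, each is chosen about half the time.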

Exporter

This script exports either a converted exemplar or a synthetic result into a wide range of formats to be used by other programs. This was created because the default is to serialize the object using pickle, which makes using our results in other programs (potentially written in different programming languages) very difficult. This script bridges the gap.

Currently, supported formats are XML, JSON, SVG and several raster formats (like PNG, JPG, PDF). This is done with the following command.

python export_result.py [input_path] [output_path]

input_path is the path to the directory (not the name of the exemplar like in the other scripts) to convert. This can either be in the intermediate or output directory. Both converted exemplars and synthetic results are interchangeable here. output_path is the path to the file where the result will be written. Unless explicitly set with the --format flag, the export format will be inferred from the extension of this file.

The following displays the help text.

python export_result.py -h

Evaluation Code

Thank you for checking out our paper and repository. If you have any problems, feel free to visit the Issue Tracker for the repository.
