
To set up a node you need VMware Player and the debian-amd64-netinst.iso image. Once you have the VM ready, log in as root and install the node from a package or from git (see below for installation instructions) and you are ready to try out the API. For optimal results we suggest setting up 3-4 nodes on the network initially. If you use ESXi v4.1 you can host several nodes on a single PC, which is easy for initial setup and testing. Once you need more processing power, you can simply add another node to the network and instruct the queue to include it in the processing of queued jobs.

We will make a virtual machine available for download very soon. For now, if the instructions above are not clear, just ask. The node service is started automatically; to start, stop or restart it manually, use the following command.

 /etc/init.d/stock-footage-node start
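
The same init script handles stop and restart:

 /etc/init.d/stock-footage-node stop
 /etc/init.d/stock-footage-node restart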


Node Installation

Install a minimal Debian 6.0.1 or later installation. I tested on 64-bit.

Installation from git (development)

Install the git client:

 apt-get install git

Clone the repository from GitHub:

 git clone git://github.com/styk-tv/Cloud-Media-Encoder.git

Go to the Cloud-Media-Encoder directory and, as root, execute:

 bash install.sh

The cloned git repository will be moved to /home/node/styk.tv.

Updating development version

Go to the directory /home/node/styk.tv and run update.sh. Configuration files (stores, local stores and encoders) are preserved.
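
For example (assuming update.sh is run with bash, like the other scripts):

 cd /home/node/styk.tv
 bash update.sh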

Uninstalling development version

Go to the directory /home/node/styk.tv and, as root, run 'bash uninstall.sh'.

Setup Storage

Before you do anything else, set up storage for your footage files. The basic node installation runs all the node software, but your files are processed on storage you attach to the node. In VMware this is as simple as adding a new virtual disk to your OS.

fdisk.py

When you run fdisk.py on your node, the newly attached disk will show up in the output below. It has no partitions and is reported as empty (in this example it is the second disk element, /dev/sdb).

Example output:

 <?xml version="1.0" ?>
 <fdisk>
   <disk bytes="8589934592" device="/dev/sda" diskId="0x000e4ae9" type="other">
     <partition blocks="7993344" boot="true" device="/dev/sda1" end="15988735" id="83" start="2048" system="Linux"/>
     <partition blocks="392193" boot="false" device="/dev/sda2" end="16775167" id="5" start="15990782" system="Extended"/>
     <partition blocks="392192" boot="false" device="/dev/sda5" end="16775167" id="82" start="15990784" system="Linux swap / Solaris"/>
   </disk>
   <disk bytes="53687091200" device="/dev/sdb" diskId="0x00000000" type="other" uuid="42e88cc3eb5346a3abc07e8ae398a52f"/>
 </fdisk>
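
If you script your provisioning, the same information can be pulled out of that XML. The Python sketch below assumes fdisk.py is on the PATH and prints its XML to stdout; it simply lists disks that have no partitions yet:

 import subprocess
 import xml.etree.ElementTree as ET

 # Run fdisk.py and capture its XML output (assumes fdisk.py is executable
 # and on the PATH; adjust the path if it lives elsewhere).
 xml_output = subprocess.check_output(["fdisk.py"]).decode()

 root = ET.fromstring(xml_output)
 for disk in root.findall("disk"):
     # A disk with no <partition> children is empty and safe for prepare.py.
     if not disk.findall("partition"):
         print("Empty disk:", disk.get("device"), disk.get("bytes"), "bytes")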

prepare.py

To format, mount and prepare an empty disk, run the command below, where <device> could be "/dev/sdb" as in the empty-disk example above. The command will only be executed if the disk is marked as empty; otherwise the action is aborted.

prepare.py <device>

If your disk showed up as node-data, you can force-mount it using the command below. Your existing stores will then be imported and become ready. If you had vhosts set up on the stores of the disk you're adding, make sure you either change the vhost names or update your DNS to resolve them.

prepare.py <device> -f
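
Putting the two tools together, a typical flow for a newly attached disk might look like this (assuming the new disk showed up as /dev/sdb, as in the example above):

 fdisk.py                # confirm the new disk is listed and shows as empty
 prepare.py /dev/sdb     # format, mount and prepare it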

Storage Types

There are several types of storage. Each represents a unique capability of the node. A store can be as simple as an assets_sample host for previewing thumbnails and watermarked FLV files, or as complex as handling encoding input and output for vod_player. A node processing a queue can encounter any of these stores and their associated tasks. Each host on your local network knows the capabilities and function set of all processing nodes. A store is also a path structure with its own security, so you can decide which jobs to publish and which ones to hide. The types are as follows:

Encoder Input [EI] - Before files can be processed by the encoder, they should be placed in the Encoder Input folder, where they wait for the queue to process them. Files on an EI store are not accessible through HTTP.

Encoder Output [EO] - Output to this store if you want your files to be available over HTTP right after processing.

Hidden Output [HI] - Output to this store if you want to hide files after processing. Usage example: perfect if you created a lower-resolution version of an original file but do not want to make it available to everyone. Works great in combination with the next type.

Linked Output [LI] - This is only a pointer to a file in any store, like a shortcut. Links are visible over HTTP and have an expiry date; once the date on a link expires, it stops pointing to its source. Usage example: you sold a file that is a render of an original sitting in an EI store. The render can go to HI and stay hidden, while the user receives an LI path by email. You could, for example, set the expiry to 3 days: the LI link is a standard HTTP link, it points to the file in HI, and it will stop working after 3 days.

Stores.xml

The entire storage of media in your network is defined using Stores.xml. Your storage is divided into individual store types. A single store may, for example, be of type EI, which stands for Encoder Input. An example of stores on a single disk is shown below. To repeat: EI and HI are not available over HTTP, but EO and LI are.

<stores>
     <disk guid="63E151C961A84E208157F8CDDB8AB424" host="192.168.1.211" dateProbed="2011-04-20T00:01:00">
          <store guid="B57378404D3A4105A8FF3222A47224EE" type="EI" />
          <store guid="2CEC0552EC60481C9FEF44831A8F6A19" type="EO" />
          <store guid="FE8B031EA8304965BC2987044F56E3F7" type="LI" />
          <store guid="BB6870B0E15245ED951ED547159D2E91" type="HI" />
     </disk>
</stores>
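
As a small sketch of how this file can be used, the Python below reads a Stores.xml with the layout shown above and reports which stores will be reachable over HTTP (the file path is an assumption; point it at your own copy):

 import xml.etree.ElementTree as ET

 # EO and LI stores are served over HTTP; EI and HI stores stay hidden.
 HTTP_VISIBLE = {"EO", "LI"}

 tree = ET.parse("Stores.xml")  # adjust to wherever your Stores.xml lives
 for disk in tree.getroot().findall("disk"):
     print("Disk", disk.get("guid"), "on host", disk.get("host"))
     for store in disk.findall("store"):
         visibility = "http" if store.get("type") in HTTP_VISIBLE else "hidden"
         print("  store", store.get("guid"), store.get("type"), visibility)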

Actual media is never stored on the primary partition, only through the stores. On the node disk, a store path always has the following format:

 /var/www/volumes/DISK_GUID/STORE_GUID/1/2/AI_GUID/AI_GUID.EXT

Where AI_GUID is the GUID of an asset item and 1 and 2 are the first and second letters of that GUID (this prevents you from having hundreds of folders in your root store folder).
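
A small helper can build that path from the GUIDs. The Python sketch below uses the disk and store GUIDs from the examples on this page and a hypothetical .mov extension:

 def store_path(disk_guid, store_guid, ai_guid, ext):
     # The first two letters of the asset GUID become intermediate folders.
     return "/var/www/volumes/{d}/{s}/{a}/{b}/{g}/{g}.{e}".format(
         d=disk_guid, s=store_guid, a=ai_guid[0], b=ai_guid[1], g=ai_guid, e=ext)

 # Hypothetical asset using GUIDs from the examples above.
 print(store_path("63E151C961A84E208157F8CDDB8AB424",
                  "B57378404D3A4105A8FF3222A47224EE",
                  "32441B1739AD426DAA39C15936B1D4D1", "mov"))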

The only exception to the path above is the PR type, where the filename of the preview FLV and the filenames of all video thumbnails can be constructed using only two parameters: AssetItemGUID[AI_GUID] and ThumbnailCount[th]:

   /var/www/volumes/DISK_GUID/STORE_GUID/1/2/AI_GUID/preview_AI_GUID.flv
   /var/www/volumes/DISK_GUID/STORE_GUID/1/2/AI_GUID/th_AI_GUID_0.jpg
   /var/www/volumes/DISK_GUID/STORE_GUID/1/2/AI_GUID/th_AI_GUID_1.jpg
   /var/www/volumes/DISK_GUID/STORE_GUID/1/2/AI_GUID/th_AI_GUID_2.jpg
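
For example, a short sketch that generates those filenames from the asset GUID and thumbnail count (values below are taken from the examples on this page):

 def pr_filenames(ai_guid, thumbnail_count):
     # preview_<guid>.flv plus th_<guid>_<n>.jpg for n = 0 .. thumbnail_count-1
     names = ["preview_{0}.flv".format(ai_guid)]
     names += ["th_{0}_{1}.jpg".format(ai_guid, n) for n in range(thumbnail_count)]
     return names

 print(pr_filenames("32441B1739AD426DAA39C15936B1D4D1", 3))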

Stores management

The stores.py tool can be used to manage local stores. Available commands:

 list - returns XML with list of local stores, their status (mounted or not, published or not)
 create <disk uuid> <store type> - creates new store on specified disk and returns uuid
 delete <store uuid> - deletes store and all contents (if mounted)
 publish <store uuid> <domain name> <port> [<redirect404>] - publishes a store on the www server with the given domain name. Optionally, a full URL can be provided to use instead of the standard 404 error page
 unpublish <store uuid> - stops publishing a store on www server
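
For example, a possible sequence for publishing an Encoder Output store on a freshly prepared disk (the disk uuid is the one from the fdisk.py example; store221.example.com and port 80 are placeholders, and <store uuid> is the uuid printed by create):

 stores.py list
 stores.py create 42e88cc3eb5346a3abc07e8ae398a52f EO
 stores.py publish <store uuid> store221.example.com 80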

The list of local stores is kept in /opt/node/etc/LocalStores.xml. This file is considered the authoritative source of information about local stores and takes precedence over Stores.xml.

The list of remote stores in Stores.xml can be managed in a similar way with remotestores.py:

 list - returns XML with the list of remote stores
 create <disk uuid> <store uuid> <store type> - adds remote store with uuid on specified disk 
 remove <store uuid> - deletes store from list
 createDisk <disk uuid> <host> - adds remote disk 
 removeDisk <disk uuid> - removes remote disk
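
For example, to register the disk and the EO store from the Stores.xml example above as remote entries on this node:

 remotestores.py createDisk 63E151C961A84E208157F8CDDB8AB424 192.168.1.211
 remotestores.py create 63E151C961A84E208157F8CDDB8AB424 2CEC0552EC60481C9FEF44831A8F6A19 EO
 remotestores.py list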

Setup vhosts

LocalStores.xml also keeps information about mapping an HTTP/1.1 hostname and port to a store GUID on your node. It is used to create configuration files for Nginx in /etc/nginx/sites-enabled.

You can bind any of your stores to the inside or outside network. When we bind an HTTP vhost to a STORE_GUID we get the following access path over HTTP (example):

 http://store221.styk.tv/3/2/32441B1739AD426DAA39C15936B1D4D1/th_32441B1739AD426DAA39C15936B1D4D1_1.jpg
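
You can verify that a published vhost is serving files by requesting any asset on it, for example with curl (this uses the example URL above; expect a 200 response if the thumbnail exists):

 curl -I http://store221.styk.tv/3/2/32441B1739AD426DAA39C15936B1D4D1/th_32441B1739AD426DAA39C15936B1D4D1_1.jpg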

NEXT: Encoders Setup