This release adds support for Python v3.13.
- Added support for Python 3.13
- Added new IntelligentCross venues `ASPN`, `ASMT`, and `ASPI`
- Upgraded `databento-dbn` to 0.23.1
- Fixed `pretty_activation` getter in `databento_dbn` returning `expiration` instead
- Fixed some `pretty_` getters in `databento_dbn` that didn't correctly handle `UNDEF_PRICE`
- Deprecated `packaging` parameter for `Historical.batch.submit_job` which will be removed in a future release
- Improved exception messages emitted by the `Live` client to always include the contents of any `ErrorMsg` sent by the gateway
- Fixed an issue where calling `Live.stop` would not close the connection within a reasonable time
- Removed deprecated `databento.from_dbn`; `databento.read_dbn` can be used instead
- Upgraded `databento-dbn` to 0.23.0
- Fixed an issue where `DBNStore.request_symbology` could request the wrong end date
- Keyword arguments to `DBNStore.to_parquet` will now allow `where` and `schema` to be specified
- Improved record processing time for the `Live` client
- Fixed an issue where validating the checksum of a batch file loaded the entire file into memory
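The checksum fix above comes down to hashing the file in fixed-size chunks rather than reading it into memory whole. A minimal sketch of that pattern (the function name and chunk size are illustrative, not the library's implementation):

```python
import hashlib


def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file incrementally so memory use stays bounded by chunk_size."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read fixed-size chunks until EOF instead of f.read() on the whole file.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

The result is identical to hashing the full contents at once, but peak memory stays at one chunk.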
This release drops support for Python 3.8, which has reached end-of-life.
- Added `PriceType` enum for validation of the `price_type` parameter in `DBNStore.to_df`
- Upgraded `databento-dbn` to 0.22.1
- Fixed return type hint for `metadata.get_dataset_condition`
- Removed support for Python 3.8 due to end of life
- Added `mode` parameter to `DBNStore.to_csv` to control the file writing mode
- Added `mode` parameter to `DBNStore.to_json` to control the file writing mode
- Added `mode` parameter to `DBNStore.to_parquet` to control the file writing mode
- Added `compression` parameter to `DBNStore.to_file` which controls the output compression format
- Added new consolidated publisher values for `XNAS.BASIC` and `DBEQ.MAX`
- Changed `DBNStore` to be more tolerant of truncated DBN streams
- Changed default write mode for `DBNStore.to_csv` to overwrite (`"w"`)
- Changed default write mode for `DBNStore.to_json` to overwrite (`"w"`)
- Changed default write mode for `DBNStore.to_parquet` to overwrite (`"w"`)
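The `mode` values mirror Python's built-in file modes: `"w"` truncates any existing file, while `"x"` raises `FileExistsError` instead of overwriting. A stand-alone illustration of the two behaviors (paths are illustrative):

```python
import tempfile
from pathlib import Path

tmp = Path(tempfile.mkdtemp())
out = tmp / "trades.csv"
out.write_text("old contents")

# Mode "w" (the new default for the DBNStore writers): overwrite in place.
with open(out, "w") as f:
    f.write("new contents")
assert out.read_text() == "new contents"

# Mode "x": refuse to clobber an existing file.
try:
    with open(out, "x") as f:
        f.write("never written")
except FileExistsError:
    print("refused to overwrite")
```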
- Added `databento.read_dbn` alias
- Added `mode` parameter to `DBNStore.to_file` to control the file writing mode
- Changed default write mode for `DBNStore.to_file` to overwrite (`"w"`)
- Deprecated `databento.from_dbn`, which will be removed in a future release; use `databento.read_dbn` instead
- Added `adjustment_factors.get_range(...)` method for the `Reference` client
- Added `security_master.get_range(...)` method for the `Reference` client
- Added `security_master.get_last(...)` method for the `Reference` client
- Upgraded `databento-dbn` to 0.20.1
- Added new publisher values for `XCIS.BBOTRADES` and `XNYS.BBOTRADES`
- Fixed an issue where receiving multiple DBN v1 `ErrorMsg` in the `Live` client would cause an `InvalidState` error
- Fixed an issue where creating `Live` clients in multiple threads could cause a `RuntimeError` upon initialization
- Changed `corporate_actions.get_range(...)` to stream compressed zstd data
- Fixed an issue where a symbol list which contained a `None` would produce a convoluted exception
- Added new publisher value for `DBEQ.SUMMARY`
- Upgraded `databento-dbn` to 0.20.0
This release adds a new feature to the `Live` client for automatically reconnecting when an unexpected disconnection occurs.
- Added `Reference` data client with a `corporate_actions.get_range(...)` method
- Added `ReconnectPolicy` enumeration
- Added `reconnect_policy` parameter to the `Live` client to specify client reconnection behavior
- Added `Live.add_reconnect_callback` method for specifying a callback to handle client reconnections
- Added platform information to the user agent reported by the `Historical` and `Live` clients
- Upgraded `databento-dbn` to 0.19.1
- Added `BBOMsg`, `CBBOMsg`, and `StatusMsg` exports to the root `databento` package
- Calling `Live.stop` will now clear all user streams and callbacks
- Renamed `Session` to `LiveSession` in the `databento.live.session` module
- A disconnected `Live` client can now be reused with a different dataset
- Upgraded `databento-dbn` to 0.19.0
- Added export of the `StatType` enum from `databento_dbn` to the root `databento` package
- Upgraded `databento-dbn` to 0.18.2
- Added type alias `TBBOMsg` for `MBP1Msg`
- Added support for `bbo-1s`, `bbo-1m`, and `status` schemas
- Instances of the `Live` client will now call `Live.stop` when garbage collected
- Added new publisher values for `XNAS.BASIC` and `XNAS.NLS`
- Upgraded `databento-dbn` to 0.18.1
- Fixed an issue where `heartbeat_interval_s` was not being sent to the gateway
- Fixed an issue where a truncated DBN stream could be written by the `Live` client in the event of an ungraceful disconnect
- Output streams of the `Live` client added with `Live.add_stream` will now upgrade to the latest DBN version before being written
- Added optional `heartbeat_interval_s` parameter to the `Live` client for configuring the interval at which the gateway will send heartbeat records
- Upgraded `databento-dbn` to 0.18.0
- Added new off-market publisher values for `IFEU.IMPACT` and `NDEX.IMPACT`
- Renamed `CbboMsg` to `CBBOMsg`
- Renamed `use_snapshot` parameter in the `Live.subscribe` function to `snapshot`
- All Python exceptions raised by `databento-dbn` have been changed to use the `DBNError` type
- Added `use_snapshot` parameter to `Live.subscribe`, which defaults to `False`
- Added `pip-system-certs` dependency for Windows platforms to prevent a connection issue in `requests` when behind a proxy
- Iteration of the `Live` client will now automatically call `Live.stop` when the iterator is destroyed, such as when a for loop is escaped with an exception or `break` statement
- Fixed an issue where `batch.download` and `batch.download_async` would fail if requested files already existed in the output directory
- Fixed an issue where `batch.download`, `batch.download_async`, and `timeseries.get_range` could use a lot of memory while streaming data
- Fixed an issue where reusing a `Live` client with an open output stream would drop DBN records when received at the same time as the `Metadata` header
- The `start_date` and `end_date` keys in the response from `Historical.metadata.get_dataset_range` will be removed in a future release; use the new `start` and `end` keys instead, which include time resolution
- The `Historical.batch.download` and `Historical.batch.download_async` methods will now automatically retry the download if a rate limit (HTTP 429) error is received
- The `Historical.batch.download` and `Historical.batch.download_async` methods will now retry failed downloads automatically
- The `Historical.batch.download` and `Historical.batch.download_async` methods will now download files concurrently
- The `output_dir` parameter for `Historical.batch.download` and `Historical.batch.download_async` is now optional and will default to the current working directory if unspecified
- The `enable_partial_downloads` parameter for `Historical.batch.download` and `Historical.batch.download_async` has been removed; partial files will always be resumed, which was the default behavior
- The parameters for `Historical.batch.download` and `Historical.batch.download_async` have been reordered because `output_dir` is now optional; `job_id` now comes first
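For the `get_dataset_range` key deprecation noted above, callers can prefer the new `start`/`end` keys while falling back to the old date-only keys. The dictionary below is a hypothetical stand-in for a response, shown only to illustrate the shape involved:

```python
# Hypothetical response shape; real responses come from
# Historical.metadata.get_dataset_range.
response = {
    "start_date": "2020-01-01",                 # deprecated, date-only
    "end_date": "2024-01-01",                   # deprecated, date-only
    "start": "2020-01-01T00:00:00.000000000Z",  # new key, includes time resolution
    "end": "2024-01-01T00:00:00.000000000Z",
}

# Prefer the new keys; fall back to the old ones for older responses.
start = response.get("start", response.get("start_date"))
end = response.get("end", response.get("end_date"))
print(start, end)
```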
- Improved exception messages when multiple `ErrorMsg` are received by the `Live` client
- Upgraded `databento-dbn` to 0.17.1
- Removed live session ID parsing to `int`, which could cause a session to fail when nothing was wrong
- Renamed publishers from deprecated datasets to their respective sources (`XNAS.NLS` and `XNYS.TRADES` respectively)
- Deprecated dataset values `FINN.NLS` and `FINY.TRADES`
- Increased `Live` session connection and authentication timeouts
- Added new `F_TOB` and `F_MAYBE_BAD_BOOK` variants to `RecordFlags`
- Fixed an issue where calling `Live.subscribe` from a `Live` client callback would cause a deadlock
- Added `DBNStore.insert_symbology_json` convenience method for adding symbology data from a JSON dict or file path
- Upgraded `databento-dbn` to 0.16.0
- Changed how `SymbolMappingMsg` objects are ingested by `InstrumentMap` to single-source the timestamp parsing from the `databento-dbn` package
- Fixed an issue where setting a timezone in `DBNStore.to_df` could cause invalid symbol mappings
- Changed `Live.add_stream` to use the exclusive write mode when handling file paths so existing files won't be overwritten
- Added `tz` parameter to `DBNStore.to_df` which will convert all timestamp fields from UTC to a specified timezone when used with `pretty_ts`
- Added new publisher values for consolidated `DBEQ.MAX`
- `Live.block_for_close` and `Live.wait_for_close` will now call `Live.stop` when a timeout is reached instead of `Live.terminate`, closing the stream more gracefully
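The `tz` parameter above localizes timestamp fields when used with `pretty_ts`. The underlying conversion from a UTC nanosecond timestamp to a zone-aware datetime can be sketched with the standard library alone; the timestamp value and target zone here are illustrative, not the library's code:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

ts_event = 1_700_000_000_000_000_000  # nanoseconds since the UNIX epoch, UTC

# Split off the sub-second part; datetime carries at most microseconds.
seconds, nanos = divmod(ts_event, 10**9)
utc_dt = datetime.fromtimestamp(seconds, tz=timezone.utc).replace(
    microsecond=nanos // 1000
)

# Localize to another zone, as to_df(tz=...) does for timestamp fields.
ny_dt = utc_dt.astimezone(ZoneInfo("America/New_York"))
print(ny_dt.isoformat())
```

Note that both datetimes refer to the same instant; only the wall-clock representation changes.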
- Substantially increased iteration queue size
- Added methods `DBNQueue.enable` and `DBNQueue.disable` for controlling queue consumption
- Added method `DBNQueue.is_enabled` to signal the queue can accept records
- Added method `DBNQueue.is_full` to signal the queue has reached capacity
- Added enabled checks to `DBNQueue.put` and `DBNQueue.put_nowait`
- Iterating a `Live` client after the streaming session has started will now raise a `ValueError`; calling `Live.start` is not necessary when iterating the `Live` client
- Moved constant `databento.live.client.DEFAULT_QUEUE_SIZE` to `databento.live.session.DBN_QUEUE_CAPACITY`
- Removed `maxsize` parameter from the `DBNQueue` constructor; `DBNQueue` now subclasses `SimpleQueue` instead
- Removed property `DBNQueue.enabled`; use `DBNQueue.is_enabled` instead
- Removed method `DBNQueue.is_half_full`; use `DBNQueue.is_full` instead
- Fixed an issue where DBN records could be dropped while iterating
- Fixed an issue where async iteration would block the event loop
- Added `Session.session_id` property which will contain the numerical session ID once a live session has been authenticated
- Upgraded `databento-dbn` to 0.15.1
- Renamed `DatabentoLiveProtocol.started` to `DatabentoLiveProtocol.is_started`, which now returns a `bool` instead of an `asyncio.Event`
- Fixed an issue where an error message from the live gateway would not properly raise an exception if the connection closed before `Live.start` was called
This release adds support for transcoding DBN data into Apache Parquet.
- Added `DBNStore.to_parquet` for transcoding DBN data into Apache Parquet using `pyarrow`
- Upgraded `databento-dbn` to 0.15.0
- Removed deprecated `pretty_px` parameter for `DBNStore.to_df`; `price_type` can be used instead
- Fixed an issue where the `Live` client would not raise an exception when reading an incompatible DBN version
- Fixed an issue where sending a large number of subscriptions could cause a `BufferError`
- Fixed an issue where `Historical.batch.download` was slow
- Fixed an issue where `Historical.timeseries.get_range` was slow
- Fixed an issue where reading a DBN file with non-empty metadata symbol mappings and mixed `SType` would cause an error when mapping symbols (credit: Jakob Lövhall)
- Added new publisher value for OPRA MIAX Sapphire
- Fixed an issue where a large unreadable symbol subscription message could be sent
- Fixed an issue where calling `Live.stop` could cause a truncated DBN record to be written to a stream
This release adds support for DBN v2 as well as Python v3.12.
DBN v2 delivers improvements to the `Metadata` header symbology, new `stype_in` and `stype_out` fields for `SymbolMappingMsg`, and extends the symbol field length for `SymbolMappingMsg` and `InstrumentDefMsg`. The full change notes are available here. Users who wish to convert DBN v1 files to v2 can use the `dbn-cli` tool available in the `databento-dbn` crate. On a future date, the Databento live and historical APIs will stop serving DBN v1.
This release of `databento-python` is fully compatible with both DBN v1 and v2, so this upgrade should be seamless for most users.
In some cases, DBN v1 records will be converted to their v2 counterparts:
- When iterating a `DBNStore` and with `DBNStore.replay`
- When iterating a `Live` client, and in records dispatched to callbacks
- Added support for Python 3.12
- Improved the performance of stream writes in the `Live` client
- Upgraded `databento-dbn` to 0.14.2
- Added `databento.common.types` module to hold common type annotations
- Fixed an issue where specifying an OHLCV schema in `DBNStore.to_ndarray` or `DBNStore.to_df` would not properly filter records by their interval
- Fixed an issue where `DBNStore.to_ndarray` and `DBNStore.to_df` with a non-zero count could get stuck in a loop if the DBN data did not contain any records
- `DBNStore` iteration and `DBNStore.replay` will upgrade DBN version 1 messages to version 2
- `Live` client iteration and callbacks upgrade DBN version 1 messages to version 2
- Moved `DBNRecord`, `RecordCallback`, and `ExceptionCallback` types to the `databento.common.types` module
- Moved `AUTH_TIMEOUT_SECONDS` and `CONNECT_TIMEOUT_SECONDS` constants from the `databento.live` module to `databento.live.session`
- Moved `INT64_NULL` from the `databento.common.dbnstore` module to `databento.common.constants`
- Moved `SCHEMA_STRUCT_MAP` from the `databento.common.data` module to `databento.common.constants`
- Removed `schema` parameter from the `DataFrameIterator` constructor; `struct_type` is to be used instead
- Removed `NON_SCHEMA_RECORD_TYPES` constant as it is no longer used
- Removed `DERIV_SCHEMAS` constant as it is no longer used
- Removed `SCHEMA_COLUMNS` constant as it is no longer used
- Removed `SCHEMA_DTYPES_MAP` constant as it is no longer used
- Removed empty `databento.common.data` module
- Added new publishers for consolidated DBEQ.BASIC and DBEQ.PLUS
- Fixed an issue where `Live.block_for_close` and `Live.wait_for_close` would not flush streams if the timeout was reached
- Fixed a performance regression when reading a historical DBN file into a numpy array
- Added `map_symbols_csv` function to the `databento` module for using `symbology.json` files to map a symbol column onto a CSV file
- Added `map_symbols_json` function to the `databento` module for using `symbology.json` files to add a symbol key to a file of JSON records
- Added new publisher values in preparation for the IFEU.IMPACT and NDEX.IMPACT datasets
- Fixed an issue where a large unreadable symbol subscription message could be sent
- Fixed an issue where `DBNStore.to_df` with `pretty_ts=True` was very slow
- Fixed an issue where `DBNStore.to_csv` and `DBNStore.to_json` were mapping symbols even when `map_symbols` was set to `False`
- Fixed an issue where empty symbology mappings caused a `ValueError` when loading symbols into the `DBNStore` instrument map
- Added `price_type` argument for `DBNStore.to_df` to specify whether price fields should be `fixed`, `float`, or `decimal.Decimal`
- Added `py.typed` marker file
- Upgraded `databento-dbn` to 0.13.0
- Changed outputs of `DBNStore.to_csv` and `DBNStore.to_json` to match the encoding formats from the Databento API
- Deprecated the `pretty_px` argument for `DBNStore.to_df`, to be removed in a future release; the default `pretty_px=True` is now equivalent to `price_type="float"`, and `pretty_px=False` is now equivalent to `price_type="fixed"`
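The three `price_type` options correspond to leaving prices as fixed-precision integers, dividing into floats, or converting exactly with `decimal.Decimal`. A stand-alone sketch of the conversions, assuming the 1e-9 fixed-precision scale used for DBN prices (the price value is illustrative):

```python
from decimal import Decimal

FIXED_PRICE_SCALE = 10**9  # DBN prices are integers with 1e-9 precision

raw = 4_500_250_000_000  # "fixed": the integer as stored in DBN

as_float = raw / FIXED_PRICE_SCALE             # fast, but inexact for some values
as_decimal = Decimal(raw) / FIXED_PRICE_SCALE  # exact decimal arithmetic

print(as_float, as_decimal)
```

`float` is usually sufficient for analysis; `decimal.Decimal` avoids binary rounding when exact prices matter.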
- Added `map_symbols` support for DBN data generated by the `Live` client
- Added support for file paths in `Live.add_stream`
- Added new publisher values in preparation for DBEQ.PLUS
- Upgraded `databento-dbn` to 0.11.1
- Fixed an issue where `DBNStore.from_bytes` did not rewind seekable buffers
- Fixed an issue where the `DBNStore` would not map symbols with an input symbology of `SType.INSTRUMENT_ID`
- Fixed an issue with `DBNStore.request_symbology` when the DBN metadata's start date and end date were the same
- Fixed an issue where closed streams were not removed from a `Live` client on shutdown
- Added `ARCX.PILLAR.ARCX` publisher
- Added `pretty_px` option for `batch.submit_job`, which formats prices to the correct scale using the fixed-precision scalar 1e-9 (available for CSV and JSON text encodings)
- Added `pretty_ts` option for `batch.submit_job`, which formats timestamps as ISO 8601 strings (available for CSV and JSON text encodings)
- Added `map_symbols` option for `batch.submit_job`, which appends a symbol field to each text-encoded record (available for CSV and JSON text encodings)
- Added `split_symbols` option for `batch.submit_job`, which will split files by raw symbol
- Upgraded `databento-dbn` to 0.10.2
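For `pretty_ts` above, formatting a nanosecond timestamp as an ISO 8601 string can be sketched with the standard library; the value is illustrative, and the service's exact output format may differ:

```python
from datetime import datetime, timezone

ts = 1_700_000_000_123_456_789  # nanoseconds since the UNIX epoch

seconds, nanos = divmod(ts, 10**9)
# datetime carries at most microseconds, so render the nanoseconds separately
# to preserve full precision in the output string.
base = datetime.fromtimestamp(seconds, tz=timezone.utc)
iso = base.strftime("%Y-%m-%dT%H:%M:%S") + f".{nanos:09d}Z"
print(iso)
```

Rendering the nanoseconds as a separate nine-digit field avoids the precision loss a plain `datetime` round-trip would introduce.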
- Fixed an issue where no disconnection exception was raised when iterating the `Live` client
- Fixed an issue where calling `DBNStore.to_df`, `DBNStore.to_json`, or `DBNStore.to_csv` with `map_symbols=True` would cause a `TypeError`
- Removed `default_value` parameter from `Historical.symbology.resolve`
- Swapped the ordering of the `pretty_px` and `pretty_ts` boolean parameters
- Fixed an issue where the index column was not serialized with `DBNStore.to_json`
- Fixed an issue where timestamps serialized by `DBNStore.to_json` had reduced precision
This release includes improvements to handling large DBN data and adds support for future datasets.
- Added `count` parameter to `DBNStore.to_df` and `DBNStore.to_ndarray` to help process large files incrementally
- Improved memory usage of `DBNStore.to_csv` and `DBNStore.to_json`
- Added the `Publisher`, `Venue`, and `Dataset` enums
- Replaced null prices with `NaN` when `pretty_px=True` in `DBNStore.to_df()`
- Upgraded `databento-dbn` to 0.8.3
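The `count` parameter enables incremental processing of large files. The consumption pattern can be sketched generically, with a plain iterable standing in for records streamed from a `DBNStore` (the helper name is illustrative):

```python
from itertools import islice
from typing import Iterable, Iterator, TypeVar

T = TypeVar("T")


def in_chunks(records: Iterable[T], count: int) -> Iterator[list[T]]:
    """Yield successive lists of at most `count` records."""
    it = iter(records)
    # islice pulls at most `count` items per pass; an empty list means EOF.
    while chunk := list(islice(it, count)):
        yield chunk


# A plain range stands in for records from a large DBN file; each chunk is
# processed and released before the next is read, keeping memory bounded.
totals = [sum(chunk) for chunk in in_chunks(range(10), count=4)]
print(totals)
```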
- Fixed an issue where exception messages were displaying JSON encoded data
- Fixed typo in `BATY.PITCH.BATY` publisher
- Reduced floating-point error when converting prices to floats with `pretty_px=True`
- `DBNStore.to_df` now always UTF-8 decodes string fields
- Fixed an issue where an extra `python` key was sent by the `Live` client
- Renamed the `TimeSeriesHttpAPI` class to `TimeseriesHttpAPI`
- Fixed an issue where `DBNStore.to_csv()`, `DBNStore.to_df()`, `DBNStore.to_json()`, and `DBNStore.to_ndarray()` would consume large amounts of memory
This release includes improvements to the ergonomics of the client's metadata API; you can read more about the changes here.
- Upgraded `databento-dbn` to 0.8.2
- Changed `metadata.list_publishers()` to return a list of publisher details objects
- Changed `metadata.list_fields(...)` to return a list of field detail objects for a particular schema and encoding
- Changed `metadata.list_fields(...)` to require the `schema` and `encoding` parameters
- Changed `metadata.list_unit_prices(...)` to return a list of unit prices for each feed mode and data schema
- Changed `metadata.list_unit_prices(...)` to require the `dataset` parameter
- Removed the `metadata.list_unit_prices(...)` `mode` and `schema` parameters
- Removed the `metadata.list_fields(...)` `dataset` parameter
- Fixed an issue where starting a `Live` client before subscribing gave an incorrect error message
- Fixed an issue where a `Live` client exception callback would fail when the callback function does not have a `__name__` attribute
This release includes updates to the fields in text encodings (CSV and JSON); you can read more about the changes here.
- Added `rtype` field, which was previously excluded, to all schemas
- Reordered fields of DataFrame and CSV encoded records to match the historical API
- Fixed an issue where the `end` parameter in `timeseries.get_range_async` did not support a value of `None`
- Fixed an issue where `timeseries.get_range` requests would begin with an invalid `path` parameter
- Fixed an issue with release tests
- Fixed an issue with the release workflow
- Added `symbology_map` property to the `Live` client
- Added `optional_symbols_list_to_list` parsing function
- Changed `Live.add_callback` and `Live.add_stream` to accept an exception callback
- Changed `Live.__iter__()` and `Live.__aiter__()` to send the session start message if the session is connected but not started
- Upgraded `databento-dbn` to 0.7.1
- Removed exception chaining from exceptions emitted by the library
- Fixed an issue where a large unreadable symbol subscription message could be sent
- Fixed an `ImportError` observed in Python 3.8
- Removed `Encoding`, `Compression`, `Schema`, and `SType` enums as they are now exposed by `databento-dbn`
- Renamed `func` parameter to `record_callback` for `Live.add_callback` and `Live.add_stream`
- Removed `optional_symbols_list_to_string` parsing function
- Fixed an issue where `DBNStore.to_df()` would raise an exception if no records were present
- Fixed the exception message when creating a `DBNStore` from an empty data source
- Added `DatabentoLiveProtocol` class
- Added `metadata` property to `Live`
- Added support for reusing a `Live` client to reconnect
- Added support for emitting warnings in API response headers
- Relaxed the 10-minute minimum request time range restriction
- Upgraded `aiohttp` to 3.8.3
- Upgraded `numpy` to 1.23.5
- Upgraded `pandas` to 1.5.3
- Upgraded `requests` to 2.28.1
- Upgraded `zstandard` to 0.21.0
- Removed support for Python 3.7
- Renamed `symbol` to `raw_symbol` in the definition schema when converting to a DataFrame
- Changed iteration of `Live` to no longer yield DBN metadata
- Changed `Live` callbacks to no longer yield DBN metadata
- Fixed an issue where `Historical.timeseries.get_range` would write empty files on error
- Fixed an issue with `numpy` types not being handled in the symbols field
- Fixed the optional `end` parameter for `batch.submit_job(...)`
- Added support for the `statistics` schema
- Added batch download support for data files (`condition.json` and `symbology.json`)
- Renamed the `booklevel` MBP field to `levels` for brevity and consistent naming
- Upgraded `databento-dbn` to 0.6.1
- Changed `flags` field to an unsigned int
- Changed the default of `ts_out` to `False` for the `Live` client
- Changed the `instrument_class` DataFrame representation to be consistent with other `char` types
- Removed `open_interest_qty` and `cleared_volume` fields, which were always unset, from the definition schema
- Removed the sunset `timeseries.stream` method
- Removed support for legacy stypes
- Added `Live` client for connecting to Databento's live service
- Added `degraded`, `pending`, and `missing` condition variants for `batch.get_dataset_condition`
- Added `last_modified_date` field to the `batch.get_dataset_condition` response
- Upgraded `databento-dbn` to 0.5.0
- Upgraded `DBNStore` to support mixed schema types for live data
- Changed iteration of `DBNStore` to return record types from `databento-dbn` instead of numpy arrays
- Renamed the `cost` field to `cost_usd` for `batch.submit_job` and `batch.list_jobs` (value now expressed as US dollars)
- Renamed `product_id` field to `instrument_id`
- Renamed `symbol` field in definitions to `raw_symbol`
- Removed `dtype` property from `DBNStore`
- Removed `record_size` property from `DBNStore`
- Removed `bad` condition variant from `batch.get_dataset_condition`
- Removed unused `LiveGateway` enum
- Removed `STATSTICS` from `Schema` enum
- Removed `STATUS` from `Schema` enum
- Removed `GATEWAY_ERROR` from `Schema` enum
- Removed `SYMBOL_MAPPING` from `Schema` enum
- Deprecated `SType.PRODUCT_ID` in favor of `SType.INSTRUMENT_ID`
- Deprecated `SType.NATIVE` in favor of `SType.RAW_SYMBOL`
- Deprecated `SType.SMART` in favor of `SType.PARENT` and `SType.CONTINUOUS`
- Changed `end` and `end_date` to optional to support new forward-fill behaviour
- Upgraded `zstandard` to 0.20.0
- Added support for the `imbalance` schema
- Added `instrument_class`, `strike_price`, and `strike_price_currency` to the definition schema
- Changed parsing of `end` and `end_date` params throughout the API
- Improved exception messages for server and client timeouts
- Upgraded `databento-dbn` to 0.4.3
- Renamed `Bento` class to `DBNStore`
- Removed `metadata.list_compressions` (redundant with docs)
- Removed `metadata.list_encodings` (redundant with docs)
- Removed optional `start` and `end` params from `metadata.list_schemas` (redundant)
- Removed `related` and `related_security_id` from the definition schema
- Improved use of the logging module
- Removed `record_count` property from the `Bento` class
- Changed `metadata.get_dataset_condition` response to a list of conditions per date
- Fixed a bug in `Bento` where invalid metadata would prevent iteration
- Added `from_dbn` convenience alias for loading DBN files
- Fixed a bug in `Bento` iteration where multiple readers were created
- Added `batch.list_files(...)` method
- Added `batch.download(...)` method
- Added `batch.download_async(...)` method
- Integrated DBN encoding 0.3.2
- Dropped support for DBZ encoding
- Renamed `timeseries.stream` to `timeseries.get_range`
- Renamed `timeseries.stream_async` to `timeseries.get_range_async`
- Changed `.to_df(...)` `pretty_ts` default argument to `True`
- Changed `.to_df(...)` `pretty_px` default argument to `True`
- Changed `.to_df(...)` `map_symbols` default argument to `True`
- Deprecated `timeseries.stream(...)` method
- Deprecated `timeseries.stream_async(...)` method
- Added support for the `definition` schema
- Updated `Flags` enum
- Upgraded `dbz-python` to 0.2.1
- Upgraded `zstandard` to 0.19.0
- Added `metadata.get_dataset_condition` method to the `Historical` client
- Upgraded `dbz-python` to 0.2.0
- Fixed DataFrame columns for derived data schemas (dropped `channel_id`)
- Fixed `batch.submit_job` requests for `dbz` encoding
- Updated the `quickstart.ipynb` Jupyter notebook
- Upgraded `dbz-python` to 0.1.5
- Added `map_symbols` option for `.to_df()` (experimental)
- Initial release