This file documents recent notable changes to this project. The format of this file is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## Unreleased

- Added `sensor` field to `OpLog`.
## 0.20.1 - 2024-10-22
- Fixed empty value of vector type from `vec![0]` to `Vec::new()` when sending a Datalake export file (see the sketch below).
- Changed the configuration field `root` to `ca_certs` to handle multiple CA certs.
- Renamed `GIGANTO_VERSION` to `REQUIRED_GIGANTO_VERSION`.
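For context on the empty-value fix above: `vec![0]` is not an empty vector; it is a one-element vector containing `0`. A minimal, illustrative sketch of the difference (not the project's actual export code):

```rust
fn main() {
    // `vec![0]` creates a vector with a single element (the number 0),
    // so it serializes as a one-element list rather than an empty one.
    let placeholder: Vec<u8> = vec![0];
    assert_eq!(placeholder.len(), 1);

    // `Vec::new()` is a genuinely empty vector, which is the correct
    // representation of an absent value in an export file.
    let empty: Vec<u8> = Vec::new();
    assert!(empty.is_empty());
}
```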
## 0.20.0 - 2024-09-25
- Updated giganto-client to version 0.20.0. Updating to this version results in the following changes:
  - Updated quinn from 0.10 to 0.11 and rustls from 0.21 to 0.23. The usage of the quinn and rustls crates has changed with this update, so the affected code has been modified accordingly.
  - Modified the parsing code for zeek logs and giganto logs due to changes in the conn, http, smtp, ntlm, ssh, and tls protocol fields.
  - Support for sending `giganto` log for new protocols (`Bootp`, `Dhcp`).
  - Changed `GIGANTO_VERSION` to "0.21.0".
- Changed to send protocol events in batches of 100.
- Applied code import ordering by `StdExternalCrate`. From now on, all code is expected to be formatted using `cargo fmt -- --config group_imports=StdExternalCrate`.
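With `group_imports=StdExternalCrate`, rustfmt orders imports into three groups: the standard library, external crates, and the current crate. A minimal sketch of the expected layout (the specific modules and crates below are illustrative, not taken from this project):

```rust
// Group 1: standard library imports.
use std::collections::HashMap;
use std::path::PathBuf;

// Group 2: external crate imports.
use serde::Deserialize;
use tokio::sync::mpsc;

// Group 3: imports from the current crate (`crate::`, `self::`, `super::`).
use crate::config::Config;
```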
## 0.19.0 - 2024-05-14
- Changed to read all command-line parameters from the config TOML file.
- Removed the option to start from a specific line (`-f`), since the skip-lines option also allows sending from a specific line.
- Modified the skip count/send count/last sent line options (`-s`/`-c`/`-r`), which only worked with logs, to work in all conditions.
- Modified it so that folder polling is applied first.
- Removed the output type option (`-o`). The associated functionality is now deprecated.
- Changed configuration field names.
  - `roots` to `root` to handle using a single root.
  - `giganto_ingest_addr` to `giganto_ingest_srv_addr`.
## 0.18.0 - 2024-01-25
- Added a function to send sysmon events from Elasticsearch.
  - `input: elastic` and the `-E` option are needed for Elasticsearch.
  - Added Elasticsearch configuration fields to the config file.
- Modified netflow event according to giganto-client.
- Modified the config file.
## 0.17.5 - 2023-11-09
- Supports Security logs. See details in the README.
## 0.17.4 - 2023-10-23
- Supports `Netflow5` and `Netflow9` pcap.
- Modified kerberos event to support giganto-client.
## 0.17.3 - 2023-08-23
- Added the line number to the convert error message.
- Supports sysmon CSV logs.
  - kind: "process_create", "file_create_time", "network_connect", "process_terminate", "image_load", "file_create", "registry_value_set", "registry_key_rename", "file_create_stream_hash", "pipe_event", "dns_query", "file_delete", "process_tamper", "file_delete_detected"
- Replaced `lazy_static` with the new `std::sync::OnceLock`.
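For reference, a minimal sketch of the standard-library pattern that replaces `lazy_static`; the `CONFIG_DIR` name and the path are illustrative, not taken from this project:

```rust
use std::sync::OnceLock;

// A lazily initialized global that previously would have required `lazy_static!`.
static CONFIG_DIR: OnceLock<String> = OnceLock::new();

fn config_dir() -> &'static str {
    // The closure runs at most once, on first access; later calls reuse the value.
    CONFIG_DIR.get_or_init(|| String::from("/etc/reproduce"))
}

fn main() {
    println!("config dir: {}", config_dir());
}
```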
## 0.17.2 - 2023-07-11
- Added a list of supported protocols to `GIGANTO_ZEEK_KINDS`.
- Changed the output option's default value to "giganto".
- Changed the input option to a required option.
- Removed unused confd files.
## 0.17.1 - 2023-07-10
- Support for sending `giganto` log for new protocols (`mqtt`).
## 0.17.0 - 2023-07-04
- Support for sending `giganto` log for new protocols (`smb`, `nfs`). For `nfs`, no zeek log exists, and for `smb`, the protocol generates multiple types of logs (conn.log/kerberos.log/smb_files.log, etc.), so only sending Giganto's log files is supported.
## 0.16.0 - 2023-06-27
- Support for extended `struct Http` with the following new fields (see the sketch below).
  - `orig_filenames: Vec<String>`
  - `orig_mime_types: Vec<String>`
  - `resp_filenames: Vec<String>`
  - `resp_mime_types: Vec<String>`
- Support for sending `giganto`/`zeek` log for new protocols (`ldap`, `tls`, `ftp`). The structure of `Tls` was defined based on the field values sent by aicer's packet extraction program. As a result, many fields will be insufficient when transmitted from a conventional zeek log (ssl.log), and the insufficient fields will be filled with the default value ("-"/0) and transmitted.
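A rough sketch of the `Http` extension listed above, showing only the four added fields; the rest of the real struct in giganto-client is omitted here:

```rust
// Illustrative fragment: only the newly added fields are spelled out.
pub struct Http {
    // ... existing fields (timestamps, addresses, method, host, etc.) omitted ...
    pub orig_filenames: Vec<String>,
    pub orig_mime_types: Vec<String>,
    pub resp_filenames: Vec<String>,
    pub resp_mime_types: Vec<String>,
}
```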
## 0.15.0 - 2023-05-18
- Modified to send only the giganto version during the handshake process by removing `-reproduce`, because the agent name is included in the certificate.
- Bumped giganto-client, quinn, and rustls to the latest versions.
## 0.14.0 - 2023-03-30
- Added `ctrlc` handling for zeek log and oplog when the grow option is given.
- Added common fields (5-tuple + duration).
- Added an additional `-m` option for use with giganto export files.
- Changed the `duration` field name to `last_time` (except the `Session` struct).
- Dropped Kafka server support.
## 0.13.0 - 2023-01-04
- Send zeek conn, http, rdp, smtp to giganto with the kind option: "conn", "http", "rdp", "smtp", "ntlm", "kerberos", "ssh", "dce_rpc".
- Added a zeek log skip option: the `-f` option reads lines starting from the given line number (at least 1).
- Send operation log to giganto with the kind option "oplog".
- The `-g` grow option is used alone; it no longer takes true or false.
- Deprecated Kafka server.
## 0.12.0 - 2022-11-02
- Send logs line by line when connected to giganto.
## 0.11.0 - 2022-10-05
- Support for x86_64-unknown-linux-musl.
- Support for the Giganto server.
  - `-o "giganto" -C "tests/config.toml"` to test
  - `-G` option to set the giganto server address (default: 127.0.0.1:38370)
  - `-N` option is the giganto server name (default: localhost)
  - `-C` option is the certificate path TOML file
  - `-k` option to set the log kind for giganto, like a Kafka topic

        [certification]
        cert = "tests/cert.pem"
        key = "tests/key.pem"
        roots = ["tests/root.pem"]

- Protocol version check before sending logs.
- Added termination logic for one-shot req/resp to giganto.
- Dropped support for packets. Run zeek and read its log files instead.
- Dropped Docker support. Instead, instructions to build a portable binary were added to the README.
## 0.10.0 - 2021-06-11
- `-V` option to display the version number.
- librdkafka is no longer needed.
- An invalid command-line option value is not converted into the default value; instead it results in an error.
- No longer requires OpenSSL.
## 0.9.10 - 2020-09-08
- "event_id = time(32bit) + serial-number(24bit) + data-origin(8bit)". The "time" is the current system time, the "data-origin" is also attached, and the "serial-number" rotates from 0 to the maximum 24-bit value, so the value of "event_id" is not continuous. If REproduce finishes processing 2^24 events within 1 second (i.e., before the "time" value changes), the serial number starts from 0 again, so the following "event_id" is less than the "event_id" of the previous event.
  - Patch: an "event_id" created later now always has a larger value than one created before.
## 0.9.9 - 2020-06-17
- Modified the magic code to identify pcap-ng.
- Modified the code to send pcap-ng pcap files.
- Followed what ClangTidy says; eliminated C++11 warnings.
## 0.9.8 - 2020-04-29
- "event_id" format is changed.
- previous format: event_id(64bit) = datasource id(upper 16bit) + sequence number(lower 48bit)
- new format: event_id(64bit) = current system time in seconds(upper 32bit) + sequence number (lower 24bit) + datasource id(lowest 8bit)
## 0.9.7 - 2020-04-08
- Added the '-j' option: the user can set the initial event_id number. Without this option, event_id will begin at 1 or skip_count+1.
- Added the '-v' option: REproduce watches the input directory and sends new files when they are found.
- Instead of the name 'report.txt', use the Kafka topic name as the file name.
- The default value of `message.timeout.ms` is set to 5,000 ms, the default value of `linger.ms`. This allows linking REproduce against librdkafka >= 1.0.
## 0.9.6 - 2019-07-22
- (test) For PCAP, this version will send the payload only rather than session + 2 KB of payload, and sessions.txt is not created.
- Produce success messages are displayed every 100 successes, i.e., around every 100 MB sent.
## 0.9.5 - 2019-07-12
- The 'report.txt' and 'sessions.txt' file names are changed to `report.txt-YYYYMMDDHHMMSS` and `sessions.txt-YYYYMMDDHHMMSS`.
- Bug fixed: the event_id for TCP, UDP, and ICMP was still the session number; it is fixed to send the packet number.
## 0.9.4 - 2019-07-10
- When REproduce sends PCAP, it will save session information into the `/report/sessions.txt` file. If the '/report' directory does not exist, REproduce will try to open the file in the current directory where REproduce is running. The session information is appended at the end of the file; you should clear it before running REproduce if you want clean data.
## 0.9.3 - 2019-07-08
- The event_id for pcap was changed to the number of packets read from that PCAP file. In the previous version, the event_id was the session number.
- The `report.txt` file will be created in the `/report/` directory if it exists, like `/report/report.txt`. If not, REproduce will try to open it in the current directory where REproduce is running. If you want to run REproduce in Docker, you should bind the `/report` directory to see the report file from the host.
- Dockerfile changed to use g++-8.