diff --git a/History.md b/History.md
index d5a958d3..a8981fce 100644
--- a/History.md
+++ b/History.md
@@ -1,3 +1,7 @@
+# v0.5.5
+* Fixed issues with ordering of headers when specifying headers in a write stream [#77](https://github.com/C2FO/fast-csv/pull/77)
+* Fixed issue where headers were not being written if no data was supplied to write stream.
+
# v0.5.4
* Fixed issues with error handling and not registering an error handler on stream [#68](https://github.com/C2FO/fast-csv/issues/68)
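The two fixes noted for v0.5.5 can be illustrated with a small sketch of the guaranteed behavior (plain Node, not fast-csv's actual implementation; `formatRows` is a hypothetical helper):

```javascript
// Sketch of the v0.5.5 formatting contract: columns follow the order of
// the `headers` array (not the row objects' key order), and the header
// line is emitted even when rows carry no matching fields.
function formatRows(headers, rows) {
  var lines = [headers.join(",")];
  rows.forEach(function (row) {
    lines.push(headers.map(function (h) {
      return row[h] == null ? "" : String(row[h]);
    }).join(","));
  });
  return lines.join("\n");
}

// Header order wins over key order, mirroring the #77 regression test.
console.log(formatRows(["second", "first"], [{first: "1", second: "2"}]));
// second,first
// 2,1
```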
diff --git a/docs/History.html b/docs/History.html
index 94b40eec..e573794e 100644
--- a/docs/History.html
+++ b/docs/History.html
@@ -176,28 +176,33 @@
-
v0.5.4
+v0.5.5
+
+- Fixed issues with ordering of headers when specifying headers in a write stream #77
+- Fixed issue where headers were not being written if no data was supplied to write stream.
+
+v0.5.4
- Fixed issues with error handling and not registering an error handler on stream #68
- Added support for ignoring quoting while parsing #75
-v0.5.3
+v0.5.3
- Fixed issues with
v0.11
stream implementation #73
- Fixed issues with
pause/resume
and data events in v0.10
#69
- Fixed the double invoking of done callback when parsing files #68
- Refactored tests
-v0.5.2
+v0.5.2
- Fixed issue with
writeToString
and writeToPath
examples #64
- Fixed issue with creating a csv without headers #63
-v0.5.1
+v0.5.1
- Fixed issue where line data was not being passed between transforms in the parser_stream
-v0.5.0
+v0.5.0
- Added support for async transforms #24
- Added support for async validation
@@ -214,30 +219,30 @@ v0.5.0
- More tests
- Code refactor and clean up
-v0.4.4
+v0.4.4
- Added support for comments. #56
-v0.4.3
+v0.4.3
- Added ability to include a
rowDelimiter
at the end of a csv with the includeEndRowDelimiter
option #54
- Added escaping for values that include a row delimiter
- Added more tests for new feature and escaping row delimiter values.
-v0.4.2
+v0.4.2
- Added ability to specify a rowDelimiter when creating a csv.
- Added discardUnmappedColumns option to allow the ignoring of extra data #45
-v0.4.1
+v0.4.1
- Fixed race condition that occurred if you called pause during a flush.
-v0.4.0
+v0.4.0
- Fixed misspelling of
delimiter
#40
-v0.3.1
+v0.3.1
- Added transform support to formatters
- When using
createWriteStream
you can now use the transform
method to specify a row transformer.
@@ -245,36 +250,36 @@ v0.3.1
-v0.3.0
+v0.3.0
- You can now specify
objectMode
when parsing a csv which will cause data
events to have an object emitted.
- You can now pipe directly to the stream returned from
createWriteStream
- You can now transform csvs by piping output from parsing into a formatter.
-v0.2.5
+v0.2.5
- Fixed issue where not all rows are emitted when using
pause
and resume
-v0.2.4
+v0.2.4
- Added more fine grained control to
.pause
and .resume
- You can now pause/resume between chunks
-v0.2.3
+v0.2.3
- Add new
createWriteStream
for creating a streaming csv writer
-v0.2.2
+v0.2.2
- Fixed issue with having line breaks containing
\r\n
-v0.2.1
+v0.2.1
- Fixed issue with
\r
line break in parser
-v0.2.0
+v0.2.0
- Added multiline value support
- Updated escaping logic
@@ -283,17 +288,17 @@ v0.2.0
- Removed support for having two quote types; instead it just supports a single quote and escape sequence.
Source code (zip)
-v0.1.2
+v0.1.2
- Fixed issue with formatter handling undefined or null values.
- Changed formatter to not include a new line at the end of a CSV.
- Added pause and resume functionality to ParserStream
-v0.1.1
+v0.1.1
- Added trim, ltrim, and rtrim to parsing options
-v0.1.0
+v0.1.0
diff --git a/docs/index.html b/docs/index.html
index 5ca42746..05b32d3e 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -178,17 +178,17 @@
-Fast-csv
+Fast-csv
This is a library that provides CSV parsing and formatting.
NOTE As of v0.2.0 fast-csv
supports multi-line values.
-Installation
+Installation
npm install fast-csv
-Usage
-Parsing
+Usage
+Parsing
All methods accept the following options
objectMode=true
: Ensure that data
events have an object emitted rather than the stringified version; set to false to have a stringified buffer.
-headers=false
: Ste to true if you expect the first line of your CSV
to contain headers, alternatly you can specify an array of headers to use.
+headers=false
: Set to true if you expect the first line of your CSV
to contain headers, alternately you can specify an array of headers to use.
ignoreEmpty=false
: If you wish to ignore empty rows.
discardUnmappedColumns=false
: If you want to discard columns that do not map to a header.
delimiter=','
: If your data uses an alternate delimiter such as ;
or \t
.
@@ -350,7 +350,7 @@ Parsing
.on("end", function(){
console.log("done");
});
-Validating
+Validating
You can validate each row in the csv by providing a validate handler. If a row is invalid then a data-invalid
event
will be emitted with the row and the index.
var stream = fs.createReadStream("my.csv");
@@ -389,7 +389,7 @@ Validating
.on("end", function(){
console.log("done");
});
-Transforming
+
You can transform data by providing a transform function. What is returned from the transform function will
be provided to validate and emitted as a row.
var stream = fs.createReadStream("my.csv");
@@ -419,7 +419,7 @@ Transforming
.on("end", function(){
console.log("done");
});
-Formatting
+
fast-csv
also allows you to create a CSV
from data.
Formatting accepts the same options as parsing with an additional transform
option.
@@ -439,7 +439,7 @@ Formatting
-Data Types
+Data Types
When creating a CSV fast-csv
supports a few data formats.
Objects
You can pass an object to any formatter function; if your csv requires headers, the keys of the first object will be used as the header names.
@@ -479,7 +479,7 @@ Data Types
//Generated CSV
//a,a,b,b,c,c
//a1,a2,b1,b2,c1,c2
-Formatting Functions
+
createWriteStream(options)
or .format(options)
This is the lowest level of the write methods; it creates a stream that can be used to create a csv of unknown size and pipe to an output csv.
var csvStream = csv.createWriteStream({headers: true}),
@@ -739,7 +739,7 @@ Formatting Functions
console.log(data); //"A,B\na1,b1\na2,b2\n"
}
);
-Piping from Parser to Writer
+Piping from Parser to Writer
You can use fast-csv
to pipe the output from a parsed CSV to a transformed CSV by setting the parser to objectMode
and using createWriteStream
.
csv
.fromPath("in.csv", {headers: true})
@@ -774,10 +774,10 @@ Piping from Parser to Writer
.fromPath("in.csv", {headers: true})
.pipe(formatStream)
.pipe(fs.createWriteStream("out.csv", {encoding: "utf8"}));
-Quoting Columns
+Quoting Columns
Sometimes you may need to quote columns in certain ways in order to meet certain requirements. fast-csv
can quote columns and headers almost any way you may need.
Note in the following example we use writeToString
but the options are valid for any of the formatting methods.
-quoteColumns
+quoteColumns
//quote all columns including headers
var objectData = [{a: "a1", b: "b1"}, {a: "a2", b: "b2"}],
arrayData = [["a", "b"], ["a1", "b1"], ["a2", "b2"]];
@@ -800,7 +800,7 @@ quoteColumns
//a1,"b1"
//a2,"b2"
});
-quoteHeaders
+
//quote all columns including headers
var objectData = [{a: "a1", b: "b1"}, {a: "a2", b: "b2"}],
arrayData = [["a", "b"], ["a1", "b1"], ["a2", "b2"]];
@@ -832,9 +832,9 @@ quoteHeaders
//"a1","b1"
//"a2","b2"
});
-License
+License
MIT https://github.com/C2FO/fast-csv/raw/master/LICENSE
-Meta
+
- Code:
git clone git://github.com/C2FO/fast-csv.git
- Website: http://c2fo.com
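The per-column quoting behavior documented above can be approximated by a short sketch (illustrative only; `quoteRow` is a hypothetical helper, not part of fast-csv's API):

```javascript
// Sketch of per-column quoting as described for quoteColumns: quote a
// value only when its column is flagged, doubling embedded quote chars.
function quoteRow(values, quoteFlags) {
  return values.map(function (v, i) {
    var s = String(v);
    if (!quoteFlags[i]) {
      return s;
    }
    return '"' + s.replace(/"/g, '""') + '"';
  }).join(",");
}

console.log(quoteRow(["a1", "b1"], [false, true])); // a1,"b1"
```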
diff --git a/package.json b/package.json
index 8f9c3c9a..850b1584 100644
--- a/package.json
+++ b/package.json
@@ -1,6 +1,6 @@
{
"name": "fast-csv",
- "version": "0.5.4",
+ "version": "0.5.5",
"description": "CSV parser and writer",
"main": "index.js",
"scripts": {
@@ -26,8 +26,7 @@
"grunt-it": "~0.3.1",
"grunt": "~0.4.1",
"grunt-contrib-jshint": "~0.10.0",
- "grunt-exec": "^0.4.5",
- "event-stream": "^3.1.7"
+ "grunt-exec": "^0.4.5"
},
"engines": {
"node": ">=0.10"
diff --git a/test/headersbug.test.js b/test/headersbug.test.js
deleted file mode 100644
index ce4eeb82..00000000
--- a/test/headersbug.test.js
+++ /dev/null
@@ -1,15 +0,0 @@
-var it = require("it"),
- assert = require("assert"),
- es = require("event-stream"),
- csv = require("../index");
-
-it.describe("formatting headers", function (it) {
- it.should("be possible by giving an ordered array", function (next) {
- var input = es.readArray([{first: "1", second: "2"}]);
- var csvStream = csv.createWriteStream({headers: ["second", "first"]});
- input.pipe(csvStream).pipe(es.writeArray(function (err, array) {
- assert.deepEqual(array.join(""), "second,first\n2,1");
- next();
- }));
- });
-});
diff --git a/test/issues.test.js b/test/issues.test.js
index 50054fb9..bd792385 100644
--- a/test/issues.test.js
+++ b/test/issues.test.js
@@ -11,7 +11,6 @@ it.describe("github issues", function (it) {
it.timeout(60000);
it.describe("#68", function (it) {
-
it.should("handle parse errors properly", function (next) {
var actual = [];
csv
@@ -29,4 +28,47 @@ it.describe("github issues", function (it) {
});
});
});
-});
\ No newline at end of file
+
+ it.describe("#77", function (it) {
+ it.should("sort columns by order of headers defined", function (next) {
+ var writable = fs.createWriteStream(path.resolve(__dirname, "assets/test.csv"), {encoding: "utf8"}),
+ stream = csv.createWriteStream({headers: ["second", "first"]})
+ .on("error", next);
+
+ writable.on("finish", function () {
+ assert.equal(fs.readFileSync(path.resolve(__dirname, "assets/test.csv")).toString(), "second,first\n2,1");
+ fs.unlinkSync(path.resolve(__dirname, "assets/test.csv"));
+ next();
+ });
+
+ stream.pipe(writable);
+
+ [{first: "1", second: "2"}].forEach(function (item) {
+ stream.write(item);
+ });
+
+ stream.end();
+ });
+
+ it.should("write headers even with no data", function (next) {
+ var writable = fs.createWriteStream(path.resolve(__dirname, "assets/test.csv"), {encoding: "utf8"}),
+ stream = csv.createWriteStream({headers: ["first", "second"]})
+ .on("error", next);
+
+ writable.on("finish", function () {
+ assert.equal(fs.readFileSync(path.resolve(__dirname, 'assets/test.csv')).toString(), "first,second\n,");
+ fs.unlinkSync(path.resolve(__dirname, 'assets/test.csv'));
+ next();
+ });
+
+ stream.pipe(writable);
+
+ [{}].forEach(function (item) {
+ stream.write(item);
+ });
+
+ stream.end();
+
+ });
+ });
+});