Ideas

Treasure Data's primary idea portal. 

Submit your ideas & feature requests directly to our product requirements team! We look forward to hearing from you.

Export Valid JSON

I tried exporting JSON and just got back:
[field, field2, field3, field4]
[field, field2, field3, field4]
...

Instead of something useful like:
{"field": value, "field2": value, "field3": value}
{"field": value, "field2": value, "field3": value}

Why would it export invalid line-by-line JSON? From the looks of it, it's basically CSV with a `[` prepended and a `]` appended to each line, and the column header row missing. So it's impossible to tell which fields the values belong to.

For now I've cracked open Google Refine and converted the CSV export to actual line-by-line JSON, but this should probably be fixed.
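
For reference, a minimal sketch of that conversion done in code instead. It assumes each exported line parses as a JSON array and that you know the column order, since the export drops the header row; the file and column names below are made up:

```typescript
import * as fs from "fs";
import * as readline from "readline";

// Column names are made up -- the export has no header row, so the
// schema has to come from somewhere else (e.g. the table definition).
const columns = ["field", "field2", "field3", "field4"];

const rl = readline.createInterface({
  input: fs.createReadStream("export.json"), // the array-per-line export
  crlfDelay: Infinity,
});

rl.on("line", (line) => {
  if (!line.trim()) return;
  const values: unknown[] = JSON.parse(line); // each line is a JSON array
  const record: Record<string, unknown> = {};
  columns.forEach((name, i) => { record[name] = values[i]; });
  console.log(JSON.stringify(record)); // valid object-per-line JSON out
});
```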

Of course, if the format is the way it is for some particular reason, then it'd be nice to have this different format as an additional option.

  • Tom Maiaroto
  • Jun 10 2016
  • Tom Maiaroto commented
    June 10, 2016 23:36

    That makes sense. I did notice, after converting it, just how much larger it became. However, I think that's fine in many cases. It would be nice and convenient to see a new export format.

    I've been dealing with similar memory issues myself: the Docker containers I have running through Amazon's ECS tasks are all configured with small amounts of RAM, and I don't want to have to change that (for cost, and because I like doing things cheaply as a challenge). I've even had success using Node.js streams to avoid memory issues. Though if I had my preference, I'd be using Go (which saves on cost even more).

    Taking it a step further and more abstracted: having some sort of stream transformation process (where users could upload the transform logic/script) that then piped the export to S3 or somewhere would be nice. Then JSON, or any other format, could be achieved. It puts more work on the user, but I'm good with that.
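
    A rough sketch of the kind of transform step I mean (the class and column names below are made up, and this is plain Node stream plumbing rather than anything Treasure Data provides):

    ```typescript
    import * as fs from "fs";
    import { Transform, TransformCallback, pipeline } from "stream";

    // Splits the input into lines, applies fn to each line, and re-emits it.
    // Only one partial line is ever buffered, so memory stays flat no matter
    // how large the export is -- the point of doing this with streams.
    class LineTransform extends Transform {
      private buffered = "";
      constructor(private fn: (line: string) => string) {
        super();
      }
      _transform(chunk: Buffer, _enc: string, done: TransformCallback) {
        const lines = (this.buffered + chunk.toString("utf8")).split("\n");
        this.buffered = lines.pop() ?? ""; // keep the trailing partial line
        for (const line of lines) {
          if (line) this.push(this.fn(line) + "\n");
        }
        done();
      }
      _flush(done: TransformCallback) {
        if (this.buffered) this.push(this.fn(this.buffered) + "\n");
        done();
      }
    }

    // Example transform: turn each exported array back into an object.
    const columns = ["field", "field2", "field3", "field4"];
    pipeline(
      fs.createReadStream("export.json"),
      new LineTransform((line) => {
        const values: unknown[] = JSON.parse(line);
        const record: Record<string, unknown> = {};
        columns.forEach((name, i) => { record[name] = values[i]; });
        return JSON.stringify(record);
      }),
      process.stdout, // in the real thing, this would be an upload to S3
      (err) => { if (err) console.error(err); },
    );
    ```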

  • Toru Takahashi commented
    June 10, 2016 23:36

    Hi, this is Toru from Treasure Data Support.
    You're talking about `td job:show ~ --format json`, right?

    Unfortunately, there is a historical reason. If the result set is very large, converting it to real JSON format caused out-of-memory errors on the server side. It also caused out-of-memory issues on the customer's side, so we decided to use the current format.
    We wanted to change the name, but we cannot change the format itself, because that would break backward compatibility.

    Thank you for your feedback.
    Our product team will keep tracking this request as a proposal for a new JSON export format that returns line-delimited JSON or something similar.
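
    The trade-off here is easy to see in code: a whole JSON document has to be read before `JSON.parse` can return anything, while line-delimited JSON can be consumed one record at a time. A minimal sketch, with made-up file names:

    ```typescript
    import * as fs from "fs";
    import * as readline from "readline";

    // One big JSON document: the entire result set has to sit in memory
    // before JSON.parse can return anything.
    const everything = JSON.parse(fs.readFileSync("result-as-array.json", "utf8"));
    console.log(everything.length);

    // Line-delimited JSON: memory use stays flat, one record at a time.
    const rl = readline.createInterface({
      input: fs.createReadStream("result.ndjson"),
    });
    rl.on("line", (line) => {
      const record = JSON.parse(line);
      // ...process one record, then let it go out of scope...
    });
    ```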

  • Tom Maiaroto commented
    June 10, 2016 23:36

    I realize this also makes the exported file size much larger.