Issue 22: huge values are valid JSON #81

Open
wants to merge 23 commits into master

Conversation

aliamcami
Collaborator

The question that originated this analysis is: "Are all big values valid JSON?"

Overview

All of the largest values are valid JSON, but they represent a very small percentage of the whole data.

Most of the data has a small value_len

(mean = 1356 for the 10% sample)
  • 95.58% of the data has a value_len smaller than the mean
  • 4.42% is bigger than the mean
  • 9.35% is valid JSON

Values above the mean:

  • 61.54% are NOT valid JSON
  • 38.46% are valid JSON

Values that are 1 standard deviation (std) above the mean

(std = 26310 for the 10% sample):
  • 0.11% are NOT valid JSON
  • 99.88% are valid JSON
  • The bigger the value, the greater the chance of it being valid JSON

Values 4 std above the mean

  • 100% are valid JSON
  • The biggest non-JSON value has a length of 104653

The top 46745 greatest value_len values are all valid JSON; that is 9.35% of the filtered sample (value_len > mean) and 0.41% of the original 10% sample.
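
For reference, a minimal sketch of the kind of validity check behind these numbers (assuming the sample is already loaded into a pandas DataFrame df with a string value column; the notebook's actual code may differ):

import json

# df is assumed to be a pandas DataFrame holding the 10% sample
def is_json(value):
    # A value counts as valid JSON if json.loads accepts it
    try:
        json.loads(value)
        return True
    except (ValueError, TypeError):
        return False

df["is_json"] = df["value"].apply(is_json)
df["value_len"] = df["value"].str.len()
mean_len = df["value_len"].mean()
above_mean = df[df["value_len"] > mean_len]
print(above_mean["is_json"].mean() * 100)  # % of above-mean values that are valid JSON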

@aliamcami
Collaborator Author

I was questioned by @birdsarah:

"what are your next questions? i'm keen to see from you what questions this work has thrown up for you - are there groups / themes to these questions? if you were concerned about tracking / privacy what would you look at next?"

So I organized some of my questions into groups/themes, and this is what I got:

About JSONs:

  • Are the JSON values always from the same location or related domains?

  • Is there a set of location domains that always produces JSON?

  • Do the JSON values follow a structural pattern? What pattern?

  • What data does the JSON hold? Is there any pattern in the content?

  • Do they contain nested JSON? CSS? HTML? JavaScript? (A recursive study of the JSON properties.)

  • Is the JSON structure for a single script_url domain always the same?

  • Is every JSON with the same structure produced by the same script_url domain?

General

I think some of these may call for a crawler investigation or just wiki reading, since someone may have already described and explained them; I just need to find, read, and understand it.

  • Are there other valid data types like HTML, CSS, etc. in the value column, or just JSON?
  • Where does the value come from? What is it used for?

Small: value_len < mean

  • What are the small values?
  • Do the smaller values have any pattern?
  • What is the majority data type?

Medium: mean < value_len < (mean + std)

  • How many rows are there in the intersection of “no JSON” and “everything is JSON” ?
  • What are they? Are they from a specific script_url domain? Or related domains?

Big: value_len > (mean + std)

  • What are the big non-JSON values?

Security and data sharing:

  • Does the value column contain any JavaScript? Nested JavaScript?
  • Does the JavaScript in the dataset contain known malicious behaviors?
  • Can it collect data that threatens users' privacy?

if you were concerned about tracking / privacy what would you look at next?

I would love to analyze the JavaScript more deeply, but that's a whole other area of knowledge. I think I can study common patterns of privacy intrusion and malicious behavior in JavaScript and try to correlate them with the scripts present in the dataset, a similar analysis to what was done in the Medium article with cryptocoin-mining scripts.

Statistical knowledge / coincidence:

The mean of the original 10% sample is pretty similar to the std of the sample taken after filtering for values above the mean

  • why?
  • Is it a coincidence?
  • Is it always like this?
  • Is it a statistical pattern?

Contributor

@birdsarah birdsarah left a comment

This is a great start.

As I mentioned before I'd really like to see some visualizations. In particular:

  1. a histogram, or something like it, of the value_len column. I think this will help you answer some of your own questions about the mean, standard deviation, etc. It's important to think about how the shape of our data can affect summary statistics like the mean. (A short sketch of this follows the list.)
  2. a plot of % json compared to, say, minimum value_len. You could start with your subset of everything over total mean (1,356) to get a feel for it.
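
A minimal sketch of the first suggestion, assuming the sample is already in a pandas DataFrame df with a value_len column (names here are illustrative, not the notebook's):

import matplotlib.pyplot as plt

# df is assumed to be a pandas DataFrame holding the 10% sample
fig, ax = plt.subplots()
ax.hist(df["value_len"], bins=100)
ax.set_yscale("log")  # the long right tail is easier to see on a log count axis
ax.set_xlabel("value_len")
ax.set_ylabel("row count (log scale)")
plt.show()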

Your follow-on questions that you posted are great. I'd like to see them included in the PR - perhaps in the README - it's likely that I'll want to turn many of them into issues later. But we'll cross that bridge when we come to it.

I think the most interesting questions, which follow naturally from where you are and which I'd love to see you tackle one or two of, are:

  • Are the JSON values always from the same location or related domains?
  • Is there a set of location domains that always produces JSON?
  • Do the JSON values follow a structural pattern? What pattern?
  • What data does the JSON hold? Is there any pattern in the content?
  • What are the big non-JSON values?

Also, if you want to output samples of these values and save them as text files as part of your folder, or gists, that might help give the future reader context.

There are some notebook cleanups that I would eventually like done before merging, but they are not as important: things like using computed values (such as the mean) wherever possible rather than manually copying them, and making sure the narrative all makes sense when reading the whole notebook.

@aliamcami aliamcami changed the title Issue 22: huge values are valid JSON [WIP] Issue 22: huge values are valid JSON Mar 27, 2019
@aliamcami
Collaborator Author

aliamcami commented Mar 27, 2019

Thank you for your incredible review.
I updated the title to WIP and I'll leave it that way until the following is ready:

  • Study and implement how to best plot the requested graphs
  • Make a readme with those questions
  • Cleanup the notebook

About the hardcoded values: I actually left them hardcoded to eliminate the need to recalculate them every time I start the notebook, since that takes quite some time for me. Should I save them to a file instead? Or keep variables holding the hardcoded values? Or leave them to be calculated every time?
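
For concreteness, this is the kind of thing I mean by saving to a file (file name and structure are hypothetical):

import json
import os

STATS_FILE = "value_len_stats.json"  # hypothetical cache file

if os.path.exists(STATS_FILE):
    with open(STATS_FILE) as f:
        stats = json.load(f)
else:
    # df is assumed to be the loaded sample; compute once, then reuse the cached numbers
    stats = {"mean": float(df["value_len"].mean()), "std": float(df["value_len"].std())}
    with open(STATS_FILE, "w") as f:
        json.dump(stats, f)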

About the follow-up questions: should I open a new PR specifically for each of them, or extend this one when I start to tackle them?

@birdsarah
Contributor

birdsarah commented Mar 27, 2019 via email

@birdsarah birdsarah changed the title [WIP] Issue 22: huge values are valid JSON Issue 22: huge values are valid JSON [WIP] Mar 28, 2019
@aliamcami
Collaborator Author

@birdsarah I have included the following:

  • Graphs visualizing the previous analysis
    • (notebook updated to: "isJson_Quantitative_Comparasion.ipynb")
  • Research/analysis on how the location domain correlates to the value column
    • (new notebook named: "isJson_correlation_domain_and_value.ipynb")
  • README with the future questions and an updated overview
  • Notebook cleanup

@aliamcami aliamcami changed the title Issue 22: huge values are valid JSON [WIP] Issue 22: huge values are valid JSON Mar 31, 2019
Contributor

@birdsarah birdsarah left a comment

Great work!

isJson_dataPrep:

  • still needs clean-up - if I tried to run it in order it actually wouldn't give the same results as you currently have.
  • I'm suspicious about your notebook because it's not showing a warning for rows like df['location_domain'] = df.location.apply(extract_domain). Dask should complain about the lack of a specified meta attribute; df['location_domain'] = df.location.apply(extract_domain, meta='O') is what you need.
  • pro dask tip (1) - whenever you're significantly reducing the size of your data, use df.repartition() to put your data into a proportionately smaller number of partitions. This will make future computation a little quicker, as there's overhead associated with opening and closing each partition. (A short sketch combining this with the meta tip appears after the code below.)
  • pro dask tip (2) - seeing a lot of nanny memory errors - sigh - working with the value column is hard. In this case I have found it generally works for me to not use the distributed client. This makes the processing go slower, but it is generally very reliable. To do this, never run the client = Client() cell. Instead:
from dask.diagnostics import ProgressBar
# set up your dataframe
with ProgressBar():
    df.to_parquet(.....)

You should get a progress bar to see how it's going. Because you're not using dask distributed you won't get any other kind of insight on progress.
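
A minimal sketch combining both tips (the parquet path, filter threshold, and partition count are illustrative assumptions; extract_domain is the notebook's own helper):

import dask.dataframe as dd

df = dd.read_parquet("sample_10pct.parquet")  # hypothetical input path

# Tip (2) companion to meta: declare the output dtype so dask doesn't have to guess (or warn)
df["location_domain"] = df.location.apply(extract_domain, meta="O")

# Tip (1): after throwing away most rows, shrink the partition count proportionately
mean_len = df["value_len"].mean().compute()
big = df[df["value_len"] > mean_len]
big = big.repartition(npartitions=max(1, big.npartitions // 10))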

isJsoncorrelationDomain:

  • pie charts.....not my favorite :) some people say they should never be used. They do have at least one specific application - showing parts-of-a-whole comparisons - but their use is definitely not needed here. A simple bar chart is probably the most appropriate viz here; people find it easy to compare heights.
  • Overall this notebook is hard for me to follow. The code is very compact in a way that's not necessarily bad but is hard to scan - using more verbose variable names would increase readability. Perhaps add a little more text along the way, perhaps in markdown cells, to explain each plot.
  • I like your use of md5 to figure out more efficiently whether values are exactly the same. I can think of some potential shortcomings, but it's a fine start: in particular, you're hashing a string, and the string can be different even if the JSON data is the same, e.g. different key order, or one value being a subset of the other. (A sketch of one way to handle the key-order issue follows this list.)
  • you have chosen to examine location_domain. location is where the action was happening, but script_url is the script that actually did the getting or setting of the JSON.
  • consider using the "operation" column to see if these are multiple reads of the same data or the same data being set over and over.
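
Following up on the md5 point, a sketch of one way to make the hash insensitive to key order by canonicalizing the JSON before hashing (function and column names are illustrative):

import hashlib
import json

def canonical_hash(value):
    # Normalize key order and whitespace before hashing; fall back to the raw
    # string when the value isn't valid JSON
    try:
        canonical = json.dumps(json.loads(value), sort_keys=True, separators=(",", ":"))
    except (ValueError, TypeError):
        canonical = value
    return hashlib.md5(canonical.encode("utf-8")).hexdigest()

# df is assumed to be a pandas DataFrame with the value column
df["value_hash"] = df["value"].apply(canonical_hash)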

isJson_QuantitativeComparison:

  • I'm curious about the variable name cdf that you chose - what does it stand for in your mind - computed dataframe?
  • plot in cell 5 - nice - consider using a log scale on the y axis so you can pick up more detail on the right side of the graph
  • plot in cell 7 - right idea - a few tweaks could make it a lot more illuminating. Firstly, consider the fact that you're now comparing two populations of different sizes, so the absolute frequency is less interesting than converting everything into a %, so you can see that, say, 40% of is_json=True is at value x1 and 40% of is_json=False is at value x2 (a quick search found this reference https://www.stat.auckland.ac.nz/~ihaka/787/lectures-distrib.pdf, which looks good but is R-focused). A very quick rework of your histogram, with and without log axes, looks like this (a sketch of one way to produce it follows the screenshots):

[Screenshots: reworked histogram of value_len, with and without log axes]
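
A sketch of one way to produce percentage-normalized histograms like these (pandas/matplotlib assumed; the bin choice is illustrative):

import numpy as np
import matplotlib.pyplot as plt

# Shared log-spaced bins so the two populations are directly comparable
bins = np.logspace(0, np.log10(df["value_len"].max()), 50)

fig, ax = plt.subplots()
for flag, group in df.groupby("is_json"):
    # weights convert absolute counts into % of each population
    weights = np.full(len(group), 100.0 / len(group))
    ax.hist(group["value_len"], bins=bins, weights=weights, alpha=0.5, label=f"is_json={flag}")
ax.set_xscale("log")
ax.set_xlabel("value_len (log scale)")
ax.set_ylabel("% of population")
ax.legend()
plt.show()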

  • plot in cell 15 - definitely should be a bar chart

  • cells 16 and 17 - super excited to see you getting stuck in on statistics - for submission of the final analysis, only include these where you have a specific link back to a property of the dataset / point you're trying to make.

  • I still want to see the plot x axis = value_len (really this value len or lower) and y axis = % valid json

@birdsarah
Contributor

I still want to see the plot x axis = value_len (really this value len or lower) and y axis = % valid json

This would ideally be across all values not just above the mean.
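
A minimal sketch of one way to build that curve across all values (assuming a pandas DataFrame df with value_len and is_json columns; this is an illustration, not the notebook's code):

import numpy as np
import matplotlib.pyplot as plt

# Sort ascending so each point answers: among rows with value_len <= x,
# what fraction is valid JSON?
ordered = df.sort_values("value_len")
pct_json = ordered["is_json"].astype(int).cumsum() / np.arange(1, len(ordered) + 1) * 100

fig, ax = plt.subplots()
ax.plot(ordered["value_len"].to_numpy(), pct_json.to_numpy())
ax.set_xscale("log")
ax.set_xlabel("value_len (this length or lower)")
ax.set_ylabel("% valid JSON")
plt.show()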

@aliamcami aliamcami changed the title Issue 22: huge values are valid JSON Issue 22: huge values are valid JSON [WIP] Apr 1, 2019
@aliamcami
Collaborator Author

Thank you for the amazing review; I now have a better idea of what to do (and how). Thank you!

@aliamcami aliamcami changed the title Issue 22: huge values are valid JSON [WIP] Issue 22: huge values are valid JSON Apr 22, 2019