Hadoop is not a single product, but rather a software family. Its common components consist of the following:
+Hadoop structures data using Hive, but can handle unstructured data easily using Pig.
+Amazon EMR includes
+Spark runs on Java 8+, Python 2.7+/3.4+ and R 3.1+. For the Scala API, Spark 2.3.0 uses Scala 2.11.
+All you need is to have java installed on your system PATH, or the JAVA_HOME environment variable pointing to a Java installation.
+ +Download the Scala binaries for Windows -- you will need Scala 2.11.x (not 2.10.x or 2.12.x) for Spark 2.3
+ +Test correct installation of scala:
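For example, from a command prompt:

scala -version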
+ +Set PATH for Scala if needed:
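A sketch for CMD, assuming the default install location:

set PATH=%PATH%;C:\Program Files (x86)\scala\bin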
+ +Test that Spark is properly installed:
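For example, launch the Spark shell from the Spark folder; the REPL should start and expose a SparkSession named spark:

bin\spark-shell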
+ +On Windows, use CMD or PowerShell, not git bash
+You can fix this problem in two ways
+Then
+Windows binaries for some Hadoop versions
+To run Spark interactively in a Python interpreter, use bin/pyspark:
Or submit Spark jobs:
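For example, with the Pi example bundled with Spark:

bin/spark-submit examples/src/main/python/pi.py 10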
+ +DataFrame operations:
+Example:
+// In the Regular Expression below:
+// ^ - Matches beginning of line
+// .* - Matches any characters, except newline
+
+df
+ .filter($"article".rlike("""^Apache_.*"""))
+ .orderBy($"requests".desc)
+ .show() // By default, show will return 20 rows
+
+// Import the sql functions package, which includes statistical functions like sum, max, min, avg, etc.
+import org.apache.spark.sql.functions._
+
+df.groupBy("project").sum().show()
+
A new column is constructed based on the input columns present in a dataframe:
+df("columnName") // On a specific DataFrame.
+col("columnName") // A generic column no yet associated with a DataFrame.
+col("columnName.field") // Extracting a struct field
+col("`a.column.with.dots`") // Escape `.` in column names.
+$"columnName" // Scala short hand for a named column.
+expr("a + 1") // A column that is constructed from a parsed SQL Expression.
+lit("abc") // A column that produces a literal (constant) value.
+
Column objects can be composed to form complex expressions:
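For example (a sketch reusing the df, requests and project columns from the examples above):

df.select($"project", ($"requests" + 1).alias("requestsPlusOne"))
  .filter($"requests" > 100 && $"project" === "en")
  .show()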
+ +CSV - Create a DataFrame with the anticipated structure
+val clickstreamDF = sqlContext.read.format("com.databricks.spark.csv")
+ .option("header", "true")
+ .option("delimiter", "\\t")
+ .option("mode", "PERMISSIVE")
+ .option("inferSchema", "true")
+ .load("dbfs:///databricks-datasets/wikipedia-datasets/data-001/clickstream/raw-uncompressed")
+
PARQUET - To create Dataset[Row] using SparkSession
+val people = spark.read.parquet("...")
+val department = spark.read.parquet("...")
+
+people.filter("age > 30")
+ .join(department, people("deptId") === department("id"))
+ .groupBy(department("name"), "gender")
+ .agg(avg(people("salary")), max(people("age")))
+
val clickstreamNoIDs8partDF = clickstreamNoIDsDF.repartition(8)
+clickstreamNoIDs8partDF.registerTempTable("Clickstream")
+sqlContext.cacheTable("Clickstream")
+
An ideal partition size in Spark is about 50 MB - 200 MB. The cache gets stored in Project Tungsten binary compressed columnar format.
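To check how many partitions a DataFrame currently has before choosing a repartition factor (using the DataFrame above):

println(clickstreamNoIDsDF.rdd.getNumPartitions)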
Here is a complete set of examples on how to use DL4J (Deep Learning for Java) with UIMA on the Spark platform
+ +and in the following project the use of CTAKES UIMA module from within the Spark framework
+Natural Language Processing with Apache Spark
+Connect to Zeppelin using the same SSH tunneling method to connect to other web servers on the master node. Zeppelin server is found at port 8890.
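A sketch of such a tunnel (the key file and master DNS name are placeholders):

ssh -i ~/mykey.pem -N -L 8890:localhost:8890 hadoop@ec2-###-##-##-##.compute-1.amazonaws.com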
+ + + + + + + + + + + + + + + + +Package a jar containing your application:
+ +Don't use sbt run
Then use [spark-submit](https://spark.apache.org/docs/latest/submitting-applications.html#launching-applications-with-spark-submit) to run your application
+YOUR_SPARK_HOME/bin/spark-submit \
+ --class "SimpleApp" \
+ --master local[4] \
+ target/scala-2.11/simple-project_2.11-1.0.jar
+
Open the Spark UI to monitor: http://localhost:4040
+The Sbt Plugin for Spark Packages is a Sbt plugin that aims to simplify the use and development of Spark Packages.
+ +Note: does not work with IntelliJ 2018.1
+The IntelliJ plugin for Spark supports deploying Spark applications and monitoring the cluster.
+The following procedure creates a cluster with Spark installed.
+Choose Create cluster to use Quick Create.
+For the Software Configuration field, choose Amazon Release Version emr-5.0.0 or later.
+Simple cluster:
+aws emr create-cluster --name "Spark cluster" --release-label emr-5.0.0 --applications Name=Spark \
+--ec2-attributes KeyName=myKey --instance-type m3.xlarge --instance-count 3 --use-default-roles
+
Note: For Windows, replace the above Linux line continuation character (\) with the caret (^).
+When using a config file:
+aws emr create-cluster --release-label emr-5.0.0 --applications Name=Spark \
+--instance-type m3.xlarge --instance-count 3 --configurations https://s3.amazonaws.com/mybucket/myfolder/myConfig.json
+
Sample myConfig.json:
+[
+ {
+ "Classification": "spark",
+ "Properties": {
+ "maximizeResourceAllocation": "true"
+ }
+ }
+]
+
Using Spot instances:
+aws emr create-cluster --name "Spot cluster" --release-label emr-5.0.0 --applications Name=Spark \
+--use-default-roles --ec2-attributes KeyName=myKey \
+--instance-groups InstanceGroupType=MASTER,InstanceType=m3.xlarge,InstanceCount=1,BidPrice=0.25 \
+InstanceGroupType=CORE,BidPrice=0.03,InstanceType=m3.xlarge,InstanceCount=2
+
+# InstanceGroupType=TASK,BidPrice=0.10,InstanceType=m3.xlarge,InstanceCount=3
+
In Java:
+// start Spark on EMR in java
+AmazonElasticMapReduceClient emr = new AmazonElasticMapReduceClient(credentials);
+Application sparkApp = new Application() .withName("Spark");
+Applications myApps = new Applications();
+myApps.add(sparkApp);
+RunJobFlowRequest request = new RunJobFlowRequest()
+    .withName("Spark Cluster")
+    .withApplications(myApps)
+    .withReleaseLabel("")
+    .withInstances(new JobFlowInstancesConfig()
+        .withEc2KeyName("myKeyName")
+        .withInstanceCount(1)
+        .withKeepJobFlowAliveWhenNoSteps(true)
+        .withMasterInstanceType("m3.xlarge")
+        .withSlaveInstanceType("m3.xlarge"));
+RunJobFlowResult result = emr.runJobFlow(request);
+
To connect to the master node using SSH, you need the public DNS name of the master node and your Amazon EC2 key pair private key. The Amazon EC2 key pair private key is specified when you launch the cluster.
+The output lists your clusters including the cluster IDs. Note the cluster ID for the cluster to which you are connecting.
+"Status": { "Timeline": { "ReadyDateTime": 1408040782.374, "CreationDateTime": 1408040501.213 }, "State": "WAITING", "StateChangeReason": { "Message": "Waiting after step completed" } }, "NormalizedInstanceHours": 4,"Id": "j-2AL4XXXXXX5T9", "Name": "My cluster"
+
aws emr list-instances --cluster-id j-2AL4XXXXXX5T9
# or:
aws emr describe-cluster --cluster-id j-2AL4XXXXXX5T9
+
YARN ResourceManager: http://master-public-dns-name:8088
+Flintrock lets you persist your desired configuration to a YAML file so that you don't have to keep typing out the same options over and over at the command line.
+To setup and edit the default config file, run this:
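flintrock configure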
+ +provider: ec2
+
+services:
+ spark:
+ version: 2.2.0
+
+launch:
+ num-slaves: 1
+
+providers:
+ ec2:
+ key-name: key_name
+ identity-file: /path/to/.ssh/key.pem
+ instance-type: m3.medium
+ region: us-east-1
+ ami: ami-97785bed
+ user: ec2-user
+
With a config file like that, you can now launch a cluster:
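For example (the cluster name is a placeholder):

flintrock launch test-cluster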
Introduction to Spark on Kubernetes
+Prerequisites:
+Need Kubernetes version 1.6 and above.
+To check the version, enter kubectl version.
The cluster must be configured to use the kube-dns addon. Check with
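One way to verify, assuming the standard kube-system namespace:

kubectl get services kube-dns --namespace=kube-system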
+$ bin/spark-submit \
+ --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
+ --deploy-mode cluster \
+ --name spark-pi \
+ --class org.apache.spark.examples.SparkPi \
+ --conf spark.executor.instances=3 \
+ --conf spark.kubernetes.container.image=<spark-image> \
+ local:///path/to/examples.jar
+
Use kubectl cluster-info
to get the K8s API server URL
Spark (starting with version 2.3) ships with a Dockerfile in the kubernetes/dockerfiles/
directory.
Then go to http://localhost:4040
+Unix tools on Windows: Cygwin
+Putty SSH client for Windows doc
+Download and install PuTTY link. Be sure to install the entire suite.
+Save private key
+login as: ec2-user (Amazon Linux AMIs) or ubuntu (Ubuntu AMIs)
+ +Use a shell script to configure the instance link
+User data: You can specify user data to configure an instance during launch, or to run a configuration script. To attach a file, select the "As file" option and browse for the file to attach.
+GUI tools to upload / manage files:
+Command-line s3 clients:
+1) Use Case
+2) Tools
+ +3) Get data into Redshift:
+Tables have ‘keys’ that define how the data is split across slices. The recommended practice is to split based upon commonly-joined columns, so that joined data resides on the same slice, thus avoiding the need to move data between systems.
+4) Examples:
+COPY table1 FROM 's3://bucket1/' credentials 'aws_access_key_id=abc;aws_secret_access_key=xyz' delimiter '|' gzip removequotes truncatecolumns maxerror 1000
+SELECT DISTINCT field1 FROM table1
+SELECT COUNT(DISTINCT field2) FROM table1
+
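A sketch of the keys idea above — distribute and sort on a commonly-joined column so joined data stays on the same slice (table and column names are hypothetical):

CREATE TABLE table1 (
  customer_id INTEGER,
  field1      VARCHAR(100),
  field2      VARCHAR(100)
)
DISTSTYLE KEY
DISTKEY (customer_id)
SORTKEY (customer_id);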
The Amazon Simple Workflow Service (Amazon SWF) makes it easy to build applications that coordinate work across distributed components. In Amazon SWF, a task represents a logical unit of work that is performed by a component of your application. Coordinating tasks across the application involves managing intertask dependencies, scheduling, and concurrency in accordance with the logical flow of the application. Amazon SWF gives you full control over implementing tasks and coordinating them without worrying about underlying complexities such as tracking their progress and maintaining their state.
+When using Amazon SWF, you implement workers to perform tasks. These workers can run either on cloud infrastructure, such as Amazon Elastic Compute Cloud (Amazon EC2), or on your own premises. You can create tasks that are long-running, or that may fail, time out, or require restarts—or that may complete with varying throughput and latency. Amazon SWF stores tasks and assigns them to workers when they are ready, tracks their progress, and maintains their state, including details on their completion. To coordinate tasks, you write a program that gets the latest state of each task from Amazon SWF and uses it to initiate subsequent tasks. Amazon SWF maintains an application's execution state durably so that the application is resilient to failures in individual components. With Amazon SWF, you can implement, deploy, scale, and modify these application components independently.
+Amazon SWF offers capabilities to support a variety of application requirements. It is suitable for a range of use cases that require coordination of tasks, including media processing, web application back-ends, business process workflows, and analytics pipelines.
Building a Dynamic DNS for Route 53 using CloudWatch Events and Lambda
+Lambkin - CLI tool for generating and managing simple functions in AWS Lambda
# NodeJS
+serverless create -p [SERVICE NAME] -t aws-nodejs
+
+# C#
+serverless create --path serverlessCSharp --template aws-csharp
+
This is a convenience method to install a pre-made Serverless Service locally by downloading the Github repo and unzipping it.
+ +Use this when you have made changes to your Functions, Events or Resources in serverless.yml
or you simply want to deploy all changes within your Service at the same time.
Use this to quickly overwrite your AWS Lambda code on AWS, allowing you to develop faster.
+ +Invokes an AWS Lambda Function on AWS and returns logs.
+ +Open up a separate tab in your console and stream all logs for a specific Function using this command.
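The corresponding CLI calls look like this (a sketch, assuming a function named hello in the current service):

serverless deploy                      # deploy all changes in serverless.yml
serverless deploy function -f hello    # overwrite only the function code (faster)
serverless invoke -f hello -l          # invoke on AWS and return logs
serverless logs -f hello -t            # tail the function logs in a separate tab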
docker run --rm -p <port>:<port> <docker image>:<tag>
+docker ps
+# cleanup
+docker kill <container>
+
To override the entrypoint, use:
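For example (image and tag are placeholders; the doubled slash avoids path mangling under git bash / MSYS):

docker run --rm -it --entrypoint //bin/sh <docker image>:<tag>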
+ +The above assumes you are using cygwin / git bash on Windows.
+Useful options:
+--restart=Never
if the pod has a console: -i --tty --command -- bash
Attach to the (first) container in the Pod:
+If there are multiple containers in the pod, use: -c <container name>
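Putting the options above together (a sketch; the busybox image and names are placeholders):

kubectl run -i --tty busybox --image=busybox --restart=Never --command -- sh
kubectl attach <pod name> -i
kubectl attach <pod name> -i -c <container name>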
Deploying scala sbt microservice to Kubernetes
sbt-built app on Kubernetes (MiniKube)
The dist task builds, under target/universal, a binary version of your application that you can deploy to a server without any dependency on sbt; the only thing the server needs is a Java installation.
Prerequisites: minikube
, kubectl
, docker
client and helm
should be installed
Verify the output under target/docker
Start minikube
Also consider enabling heapster
kubectl
is properly configuredIt should return one node.
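Check with:

kubectl get nodes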
+Just make sure you tag your Docker image with something other than ‘latest’ and use that tag while you pull the image.
+Otherwise, if you do not specify version of your image, it will be assumed as :latest
, with pull image policy of Always
correspondingly, which may eventually result in ErrImagePull as you may not have any versions of your Docker image out there in the default docker registry (usually DockerHub) yet.
If needed, remove previously built images from the local Docker server with sbt docker:clean
or docker rmi <image>
.
+To view the list of Docker images, run docker images
Build the Docker image and publish it to Kubernetes' Docker server.
+and if that looks OK
+ +or specify a release name:
+ +minikube
More details via:
+ +kubectl get pods
+kubectl port-forward <pod name> 8080:<target port on pod>
+curl -v https://localhost:8080/api
+
kubectl port-forward
also allows using resource name, such as a service name, to select a matching pod to port forward to
values.yaml
in the Helm chart root folderSee Blog
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ${KEY_FILE} -out ${CERT_FILE} -subj "/CN=${HOST}/O=${HOST}"
+
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=john-cd.com"
+
Note: To find myhost.com for minikube, run the following commands:
+ +kubectl create secret tls ${CERT_NAME} --key ${KEY_FILE} --cert ${CERT_FILE}
+
+kubectl create secret tls my-secret --key tls.key --cert tls.crt
+
Add under spec:
in
Find and delete all nginx pods to force the nginx.conf
to update and reflect the ingress changes. Find the ingress pods with the following:
using kubectl
A ConfigMap stores K8s-specific configuration that can be mounted as volume or used in env variables. +It is often used to provide production configuration: application configuration, log settings, etc...
+kubectl create configmap app-conf --from-file=<path to config files> # create a ConfigMap from multiple files in the same directory.
+
You may need a Secret to store database passwords and secret keys.
+For applications using the Play Framework, generate a secret using:
secretText=$(sbt playGenerateSecret)
regex="Generated new secret: (.+)$"
if [[ $secretText =~ $regex ]]
then
    secret="${BASH_REMATCH[1]}"
    echo $secret
    kubectl create secret generic application-secret --from-literal=application_secret=$secret
    kubectl get secrets
else
    echo "$secretText doesn't match" >&2
fi
+
minikube
provides its own ingress controller via the Ingress add-on:Enabling the add-on provisions the following:
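minikube addons enable ingress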
+or with stat collection enabled for Prometheus
+helm install --name nginx-ingress-release stable/nginx-ingress \
+ --set controller.stats.enabled=true \
+ --set controller.metrics.enabled=true
+
See explanations and documentation
+The nginx ingress controller requires a 404-server like this
+A Docker image is a read-only template. For example, an image could contain an Ubuntu operating system with Apache and your web application installed. Images are used to create Docker containers. Docker provides a simple way to build new images or update existing images, or you can download Docker images that other people have already created. Docker images are the buildcomponent of Docker.
+Docker registries hold images.
+To show only running containers use:
+ +To show all containers use:
+ +Show last started container:
+ +Download an image:
+ +Create then start a container: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
+ * Docker run reference
Run with interactive terminal (i = interactive t = terminal):
+ +Start then detach the container (daemonize):
+ +If you want a transient container, docker run --rm
will remove the container after it stops.
Looks inside the container (use -f
to act like tail -f
):
Stop container:
+ +Delete container:
+ +To check the environment:
+ +Docker version / info:
+ +-p 80:5000
would map port 80 on our local host to port 5000 inside our container.
Full format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
+ +Both hostPort and containerPort can be specified as a range of ports. When specifying ranges for both, the number of container ports in the range must match the number of host ports in the range, for example: -p1234-1236:1234-1236/tcp
The -P
flag tells Docker to map any required network ports inside our container to our host (using random ports).
--link <name or id>:alias
where name is the name of the container we’re linking to and alias is an alias for the link name.
+The --link
flag also takes the form: --link <name or id>
docker run -d --name myES -p 9200:9200 -p 9300:9300 elasticsearch
+docker run --name myK --link myES:elasticsearch -p 5601:5601 -d docker-kibana-sense
+
Find out the container’s IP address:
+ +Create a new volume inside a container at /webapp:
+ +You can also use the VOLUME instruction in a Dockerfile to add one or more new volumes to any container created from that image.
+Mount the host directory, /src/webapp
, into the container at /opt/webapp
.
On Windows, use: docker run -v /c/Users/<path>:/<container path> ...
How to create your first Helm chart
+./helm create <folder containing chart>
+
+./helm lint <folder>
+
+./helm install --dry-run --debug <folder>
+
Create requirements.yaml
Add a remote repo
+ +and, from the chart directory, run:
+ + + + + + + + + + + + + + + + + + + +minikube
¶minikube ip
commandkubectl
¶kubectl run hello-world --replicas=2 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080
+
kubectl get
- list resources.kubectl get deployment
to get all deploymentskubectl get pods -l app=nginx
to get pods with label "app: nginx"kubectl describe
- show detailed information about a resourcekubectl logs
- print the logs from a container in a podkubectl exec
- execute a command on a container in a podWhen using a single VM of Kubernetes, it’s really handy to reuse the minikube’s built-in Docker daemon
+ +Just make sure you tag your Docker image with something other than ‘latest’ and use that tag while you pull the image.
+Otherwise, if you do not specify version of your image, it will be assumed as :latest
, with pull image policy of Always
correspondingly, which may eventually result in ErrImagePull as you may not have any versions of your Docker image out there in the default docker registry (usually DockerHub) yet.
A Docker client is required to publish built docker images to the Docker daemon running inside of minikube. +See installing Docker for instructions for your platform.
+ + + + + + + + + + + + + + + + + + +Information about how to run each container, such as the container image version or specific ports to use
+Nodes: A Pod always runs on a Node. A Node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster. Each Node is managed by the Master. A Node can have multiple pods, and the Kubernetes master automatically handles scheduling the pods across the Nodes in the cluster.
+Deployment - The most common way of running X copies (Pods) of your application. Supports rolling updates to your container images.
+Service - By itself, a Deployment can’t receive traffic. Setting up a Service is one of the simplest ways to configure a Deployment to receive and loadbalance requests. Depending on the type of Service used, these requests can come from external client apps or be limited to apps within the same cluster. A Service is tied to a specific Deployment using label selection.
+Labels - Identifying metadata that you can use to sort and select sets of API objects. Labels have many applications, including the following:
+To keep the right number of replicas (Pods) running in a Deployment. The specified label is used to stamp the Deployment’s newly created Pods (as the value of the spec.template.labels
configuration field), and to query which Pods it already manages (as the value of spec.selector.matchLabels
).
apiVersion: v1
+kind: Service
+metadata:
+ name: p2p-robot-service
+spec:
+ selector:
+ app: p2p-robot
+ ports:
+ - name: http
+ protocol: TCP
+ port: 80
+      targetPort: http # can be a text label (port name) or a port number
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: p2p-robot-deployment
+spec:
+ selector:
+ matchLabels:
+ app: p2p-robot
+ replicas: 2 # tells deployment to run 2 pods matching the template
+ template: # create pods using pod definition in this template
+ metadata:
+ # the name is not included in the meta data as a unique name is
+ # generated from the deployment name
+ labels:
+ app: p2p-robot # label used above in matchLabels
+ spec:
+ containers:
+ - name: p2p-robot
+ image: "johncd/p2p-robot:1.0.0"
+ imagePullPolicy: IfNotPresent
+ ports:
+ - containerPort: 9000
+ name: http
+ env:
+ - name: APPLICATION_SECRET # Place the application secret in an environment variable, which is read in application.conf
+ valueFrom:
+ secretKeyRef:
+ name: application-secret
+ key: application_secret
+ volumeMounts:
+ - name: conf-volume
+ mountPath: /usr/local/etc
+ volumes:
+ - name: conf-volume
+ configMap: # The configMap resource provides a way to inject configuration data into Pods.
+ name: app-conf
+
# Ingress
+# https://kubernetes.io/docs/concepts/services-networking/ingress/
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: test
+ annotations:
+ ingress.kubernetes.io/rewrite-target: /
+ kubernetes.io/ingress.class: nginx # Use the nginx-ingress Ingress controller
+spec:
+ tls:
+ - secretName: ingresssecret # Referencing this secret in an Ingress will tell the Ingress controller to secure the channel from the client to the loadbalancer using TLS
+ rules:
+ - http:
+ paths:
+ - path: /api
+ backend:
+ serviceName: s1
+ servicePort: 80
+---
+# Secure the Ingress by specifying a secret that contains a TLS private key and certificate.
+apiVersion: v1
+data:
+ tls.crt: base64 encoded cert
+ tls.key: base64 encoded key
+kind: Secret
+metadata:
+ name: ingresssecret
+ namespace: default
+type: Opaque
+
kubectl
in Ubuntu on Windows¶cd ~
+mkdir bin
+curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
+chmod +x ./kubectl
+# optionally
+sudo mv ./kubectl /usr/local/bin/kubectl
+# then test
+kubectl get all
+# enable autocompletion
+source <(kubectl completion bash)
+echo "source <(kubectl completion bash)" >> ~/.bashrc
+
kube-up.sh
or successfully deploy a Minikube cluster.Check that kubectl is properly configured by getting the cluster state:
+ +Beware that you may have two different config files in ~/.kube/
and /mnt/c/Users/<user name>/.kube
if you installed minikube
in Windows.
minikube
on Windows¶Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.
+C:\Program Files (x86)\Kubernetes\minikube
or similar to the PATH (in System Settings
> Environment Variables
)More info at Getting Started
+kubectl
¶Use a version of kubectl that is the same version as your server or later. Using an older kubectl
with a newer server might produce validation errors.
On Windows 10 (using Git Bash):
+curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.10.0/bin/windows/amd64/kubectl.exe
+
OR
+ +Then
+ +Run kubectl version to verify that the version you’ve installed is sufficiently up-to-date.
+ +kubectl
¶Configure kubectl
to use a remote Kubernetes cluster
~/.kube
config does not exist (it should have been created by minikube
), enter the following in Powershell:Edit the config file with a text editor of your choice.
+Check that kubectl
is properly configured by getting the cluster state:
kubectl auth can-i list pods
+kubectl auth can-i create pods
+kubectl auth can-i edit pods
+kubectl auth can-i delete pods
+
kubectl
from the Ubuntu on Windows command line¶If installed by choco
minikube
¶Running Kubernetes Locally via Minikube
+We have now launched an echoserver pod but we have to wait until the pod is up before curling/accessing it via the exposed service. +To check whether the pod is up and running we can use the following:
+ +Once the pod is running, curl it:
+ +Helm is a package manager for Kubernetes. Download a binary release of the Helm client from here
+This will install Tiller (the helm server) into the current Kubernetes cluster (as listed in kubectl config current-context
).
Quora: Which is the best machine learning library for .NET?
+Deedle- Exploratory data library for .NET
+Deedle is an easy to use library for data and time series manipulation and for scientific programming. It supports working with structured data frames, ordered and unordered data, as well as time series. Deedle is designed to work well for exploratory programming using F# and C# interactive console, but can be also used in efficient compiled .NET code.
+The library implements a wide range of operations for data manipulation including advanced indexing and slicing, joining and aligning data, handling of missing values, grouping and aggregation, statistics and more.
+ +Accord.NET provides statistical analysis, machine learning, image processing and computer vision methods for .NET applications. The Accord.NET Framework extends the popular AForge.NET with new features, adding to a more complete environment for scientific computing in .NET.
+ + + + + + + + + + + + + + + + + + +Data visualization - Wikipedia
+19 Tools for Data Visualization Projects
+22 free tools for data visualization and analysis - Computerworld
+22 free tools for data visualization and analysis
+D3 provides many built-in reusable functions and function factories, such as graphical primitives for area, line and pie charts.
+ + + +Why-is-Deep-Learning-so-popular-and-in-demand-these-days
+Deep Learning for beginners (deeplearning4j)
+The best answers to your most crucial deep learning questions
+Colah's blog - Neural Networks
+ + +Deep Learning for Visual Question Answering
+Visualizing MNIST- An Exploration of Dimensionality Reduction - colah's blog
+ + + + + + + + + + + + + + + + + + + +t-distributed stochastic neighbor embedding - Wikipedia
+Lecture 10 Reinforcement Learning I
+ +PyBrain - a simple neural networks library in Python
+ +Building a Recommendation Engine- An Algorithm Tutorial - Toptal
+ + + + + + + + + + + + + + + + + + + +Cheatsheet- Scikit-Learn & Caret Package for Python & R respectively
+ + + + + + + + + + + + + + + + + + +Specifying --headerline
instructs mongoimport to determine the name of the fields using the first line in the CSV file.
+Use the --ignoreBlanks
option to ignore blank fields. For CSV and TSV imports, this option provides the desired functionality in most cases, because it avoids inserting fields with null values into your collection.
myCursor.forEach(printjson);
+
+// or
+while (myCursor.hasNext()) {
+  printjson(myCursor.next());
+}
+
// lowercase a string
+{ $project: { "address": { $toLower: "$address" } } },
+
+// extract field within embedded document
+{ $project: { "experience.location": 1 } },
+
+// flatten
+{ $unwind: "$experience"},
+{ $group: { _id: "$_id", locs: { $push: { $ifNull: [ "$experience.location", "undefined" ] } } } }
+
+// output a collection
+{ $out: "myCollection2" }
+
+// get unique values
+{ $group: { _id: "$fulladdress" } }
+
Don't use copyTo - it is fully blocking... and deprecated in 3.x
+db = db.getSiblingDB("myDB"); // set current db for $out
+var myCollection = db.getCollection("myCollection");
+
+// project if needed, get uniques if needed, create a new collection
+myCollection.aggregate([{ $project:{ "fulladdress": 1 } },{ $group:{ _id: "$fulladdress" } },{ $out: "outputCollection" }], { allowDiskUse:true });
+
var outputColl = db.getCollection( "outputCollection" );
+var outputBulk = outputColl.initializeUnorderedBulkOp();
+myCollection.find( {}, { "fulladdress": 1 } ).forEach( function(doc) {
+ outputBulk.insert(doc);
+});
+outputBulk.execute();
+
Add a count field to all records
+function gatherStats() {
+ var start = Date.now();
+
+ var inputDB = db.getSiblingDB("inputDB");
+ var inputColl = inputDB.getCollection("inputColl");
+
+ // debug: inputColl.find( {} ).limit(2).forEach(printjson);
+
+ outputDB = db.getSiblingDB("outputDB");
+ db = outputDB; // set current database for the next aggregate step
+
+ // create temporary collection with count
+ inputColl.aggregate( [
+ { $group: { _id: { $toLower: "$address" }, count: { $sum: 1 } } },
+ { $sort: { "count": -1 } },
+ { $limit: 100000 }, // limit to 100k addresses with highest count
+ { $out: "stats" }
+ ], { allowDiskUse: true } ); // returns { _id, count } where _id is the address
+
+ var statsColl = outputDB.getCollection("stats");
+
+ // create output collection
+ var outputColl = outputDB.getCollection("outputColl");
+ var outputBulk = outputColl.initializeUnorderedBulkOp();
+ var counter = 0;
+
+ var inputCursor = inputColl.find( {}, {} );
+ inputCursor.forEach( function(doc) {
+ var statDoc = statsColl.findOne( { _id: doc.address } );
+ if (statDoc) {
+ doc.count = statDoc.count;
+ outputBulk.insert(doc);
+ counter++;
+ if ( counter % 1000 == 0 ) {
+ outputBulk.execute();
+ // you have to reset
+ outputBulk = outputColl.initializeUnorderedBulkOp();
+ }
+ }
+ }
+ );
+
+ if ( counter % 1000 > 0 )
+ outputBulk.execute();
+
+
+ // print the results
+ outputColl.find({}).sort({count: -1}).forEach(printjson);
+
+ var end = Date.now();
+ var duration = (end - start)/1000;
+ printjson("Duration: " + duration + " seconds");
+
+ printjson(" | DONE | ");
+}
+
+gatherStats();
+
Alternatively move data to memory:
+ var statsDict = {}; // or better Object.create(null);
+ statsColl.find({}).forEach( function(doc) { statsDict[doc._id] = doc.count } );
+
+ // could also use: var statsArray = statsCursor.toArray();
+
+ inputCursor.forEach( function(doc) {
+ if (doc.address in statsDict)
+ {
+ doc["count"] = statsDict[doc.address];
+ outputBulk.insert(doc);
+ }
+ });
+ outputBulk.execute();
+
First column of SORTKEY should not be compressed
+Workflows: move from staging table to production table
+Compress your staging tables
+Do ANALYZE after VACUUM
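VACUUM table1;
ANALYZE table1;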
+Filter:
+ +Like:
+SELECT * FROM Customers
+WHERE City LIKE 's%';
+
+SELECT * FROM Customers
+WHERE Country LIKE '%land%';
+
+SELECT * FROM Customers
+WHERE Country NOT LIKE '%land%';
+
Sort:
+SELECT * FROM Customers
+ORDER BY Country DESC;
+
+SELECT * FROM Customers
+ORDER BY Country, CustomerName;
+
Limit:
+SELECT TOP number|percent column_name(s)
+FROM table_name;
+
+-- Examples:
+SELECT TOP 2 * FROM Customers;
+
+SELECT TOP 50 PERCENT * FROM Customers;
+
Oracle Syntax:
+ +Joins:
+SELECT Customers.CustomerName, Orders.OrderID
+FROM Customers
+FULL OUTER JOIN Orders
+ON Customers.CustomerID = Orders.CustomerID
+ORDER BY Customers.CustomerName;
+
Union:
+SELECT column_name(s) FROM table1
+UNION
+SELECT column_name(s) FROM table2;
+
+SELECT column_name(s) FROM table1
+UNION ALL
+SELECT column_name(s) FROM table2;
+
Select Into:
+ +Formula:
+ +INSERT INTO table_name
+VALUES (value1,value2,value3,...);
+
+INSERT INTO table_name (column1,column2,column3,...)
+VALUES (value1,value2,value3,...);
+
+-- Example:
+
+INSERT INTO Customers (CustomerName, City, Country)
+VALUES ('Cardinal', 'Stavanger', 'Norway');
+
Insert from select:
+INSERT INTO table2(column_name(s))
+SELECT column_name(s)
+FROM table1;
+
+-- Example:
+
+INSERT INTO Customers (CustomerName, Country)
+SELECT SupplierName, Country FROM Suppliers
+WHERE Country='Germany';
+
UPDATE table_name
+SET column1=value1,column2=value2,...
+WHERE some_column=some_value;
+
+-- Example:
+
+UPDATE Customers
+SET ContactName='Alfred Schmidt', City='Hamburg'
+WHERE CustomerName='Alfreds Futterkiste';
+
DELETE FROM table_name
+WHERE some_column=some_value;
+
+DELETE FROM Customers
+WHERE CustomerName='Alfreds Futterkiste' AND ContactName='Maria Anders';
+
Create:
+CREATE TABLE table_name
+(
+column_name1 data_type(size),
+column_name2 data_type(size),
+column_name3 data_type(size),
+....
+);
+
+CREATE TABLE table_name
+(
+column_name1 data_type(size) constraint_name,
+column_name2 data_type(size) constraint_name,
+column_name3 data_type(size) constraint_name,
+....
+);
+
-- Examples
+CREATE TABLE Persons
+(
+P_Id int NOT NULL UNIQUE,
+LastName varchar(255) NOT NULL,
+FirstName varchar(255),
+Address varchar(255),
+City varchar(255)
+)
+
+CREATE TABLE Persons
+(
+P_Id int NOT NULL,
+LastName varchar(255) NOT NULL,
+FirstName varchar(255),
+Address varchar(255),
+City varchar(255),
+CONSTRAINT uc_PersonID UNIQUE (P_Id, LastName)
+)
+
ALTER TABLE Persons
+ADD CONSTRAINT uc_PersonID UNIQUE (P_Id,LastName)
+
+ALTER TABLE Persons
+DROP CONSTRAINT uc_PersonID
+
Temporary Table:
+ +Drop / Truncate:
+ +CREATE TABLE Persons
+(
+P_Id int NOT NULL PRIMARY KEY,
+LastName varchar(255) NOT NULL,
+FirstName varchar(255),
+Address varchar(255),
+City varchar(255)
+)
+
+CREATE TABLE Persons
+(
+P_Id int NOT NULL,
+LastName varchar(255) NOT NULL,
+FirstName varchar(255),
+Address varchar(255),
+City varchar(255),
+CONSTRAINT PK_PersonID PRIMARY KEY (P_Id,LastName)
+)
+
+ALTER TABLE Persons
+ADD CONSTRAINT PK_PersonID PRIMARY KEY (P_Id,LastName)
+
+ALTER TABLE Persons
+DROP CONSTRAINT PK_PersonID
+
CREATE TABLE Orders
+(
+O_Id int NOT NULL PRIMARY KEY,
+OrderNo int NOT NULL,
+P_Id int FOREIGN KEY REFERENCES Persons(P_Id)
+)
+
+CREATE TABLE Orders
+(
+O_Id int NOT NULL,
+OrderNo int NOT NULL,
+P_Id int,
+PRIMARY KEY (O_Id),
+CONSTRAINT FK_PerOrders FOREIGN KEY (P_Id)
+REFERENCES Persons(P_Id)
+)
+
ALTER TABLE Orders
+ADD FOREIGN KEY (P_Id)
+REFERENCES Persons(P_Id)
+
+ALTER TABLE Orders
+ADD CONSTRAINT fk_PerOrders
+FOREIGN KEY (P_Id)
+REFERENCES Persons(P_Id)
+
+ALTER TABLE Orders
+DROP CONSTRAINT fk_PerOrders
+
CREATE TABLE Persons
+(
+P_Id int NOT NULL CHECK (P_Id>0),
+LastName varchar(255) NOT NULL,
+FirstName varchar(255),
+Address varchar(255),
+City varchar(255)
+)
+
+CREATE TABLE Persons
+(
+P_Id int NOT NULL,
+LastName varchar(255) NOT NULL,
+FirstName varchar(255),
+Address varchar(255),
+City varchar(255),
+CONSTRAINT chk_Person CHECK (P_Id>0 AND City='Sandnes')
+)
+
ALTER TABLE Persons
+ADD CONSTRAINT CHK_Person CHECK (P_Id>0 AND City='Sandnes')
+
+ALTER TABLE Persons
+DROP CONSTRAINT CHK_Person
+
CREATE TABLE Orders
+(
+O_Id int NOT NULL,
+OrderNo int NOT NULL,
+P_Id int,
+OrderDate date DEFAULT GETDATE()
+)
+
ALTER TABLE Persons
+ALTER COLUMN City SET DEFAULT 'SEATTLE'
+
+ALTER TABLE Persons
+ALTER COLUMN City DROP DEFAULT
+
CREATE UNIQUE INDEX index_name
+ON table_name (column_name)
+
+CREATE INDEX index_name
+ON table_name (column_name1, col_name2)
+
+-- Example:
+
+CREATE INDEX PIndex
+ON Persons (LastName, FirstName)
+
DROP INDEX table_name.index_name
+
+-- Example:
+
+DROP INDEX IX_ProductVendor_BusinessEntityID
+ ON Purchasing.ProductVendor;
+
ALTER TABLE table_name
+ADD column_name datatype
+
+ALTER TABLE table_name
+DROP COLUMN column_name
+
+ALTER TABLE table_name
+ALTER COLUMN column_name datatype
+
CREATE TABLE Persons
+(
+ID int IDENTITY(1,1) PRIMARY KEY,
+LastName varchar(255) NOT NULL,
+FirstName varchar(255),
+Address varchar(255),
+City varchar(255)
+)
+
Example:
+CREATE TABLE dbo.PurchaseOrderDetail
+(
+ PurchaseOrderID int NOT NULL
+ REFERENCES Purchasing.PurchaseOrderHeader(PurchaseOrderID),
+ LineNumber smallint NOT NULL,
+ ProductID int NULL
+ REFERENCES Production.Product(ProductID),
+ UnitPrice money NULL,
+ OrderQty smallint NULL,
+ ReceivedQty float NULL,
+ RejectedQty float NULL,
+ DueDate datetime NULL,
+ rowguid uniqueidentifier ROWGUIDCOL NOT NULL
+ CONSTRAINT DF_PurchaseOrderDetail_rowguid DEFAULT (newid()),
+ ModifiedDate datetime NOT NULL
+ CONSTRAINT DF_PurchaseOrderDetail_ModifiedDate DEFAULT (getdate()),
+ LineTotal AS ((UnitPrice*OrderQty)),
+ StockedQty AS ((ReceivedQty-RejectedQty)),
+ CONSTRAINT PK_PurchaseOrderDetail_PurchaseOrderID_LineNumber
+ PRIMARY KEY CLUSTERED (PurchaseOrderID, LineNumber)
+ WITH (IGNORE_DUP_KEY = OFF)
+)
+ON PRIMARY;
+
Examples:
+CREATE VIEW [Products Above Average Price] AS
+SELECT ProductName,UnitPrice
+FROM Products
+WHERE UnitPrice > (SELECT AVG(UnitPrice) FROM Products)
+
+SELECT * FROM [Products Above Average Price]
+
CREATE VIEW [Category Sales For 1997] AS
+SELECT DISTINCT CategoryName, Sum(ProductSales) AS CategorySales
+FROM [Product Sales for 1997]
+GROUP BY CategoryName
+
GETDATE() -- Returns the current date and time
+
+DATEPART() -- Returns a single part of a date/time
+
+DATEADD() -- Adds or subtracts a specified time interval from a date
+
+DATEDIFF() -- Returns the time between two dates
+
+CONVERT() -- Displays date/time data in different formats
+
Example:
+CREATE TABLE Orders
+(
+OrderId int NOT NULL PRIMARY KEY,
+ProductName varchar(50) NOT NULL,
+OrderDate datetime NOT NULL DEFAULT GETDATE()
+)
+
+SELECT DATEPART(yyyy,OrderDate) AS OrderYear,
+DATEPART(mm,OrderDate) AS OrderMonth,
+DATEPART(dd,OrderDate) AS OrderDay
+FROM Orders
+WHERE OrderId=1
+
+SELECT OrderId,DATEADD(day,45,OrderDate) AS OrderPayDate
+FROM Orders
+
+SELECT DATEDIFF(day,'2008-06-05','2008-08-05') AS DiffDate
+
+CONVERT(VARCHAR(19),GETDATE())
+CONVERT(VARCHAR(10),GETDATE(),10)
+CONVERT(VARCHAR(10),GETDATE(),110)
+
Data type / Description / Storage
+char(n)
+Fixed width character string. Maximum 8,000 characters
+Defined width
varchar(n)
+Variable width character string. Maximum 8,000 characters
+2 bytes + number of chars
varchar(max)
+Variable width character string. Maximum 1,073,741,824 characters
+2 bytes + number of chars
text
+Variable width character string. Maximum 2GB of text data
+4 bytes + number of chars
nchar
+Fixed width Unicode string. Maximum 4,000 characters
+Defined width x 2
nvarchar
+Variable width Unicode string. Maximum 4,000 characters
nvarchar(max)
+Variable width Unicode string. Maximum 536,870,912 characters
ntext
+Variable width Unicode string. Maximum 2GB of text data
bit
+Allows 0, 1, or NULL
binary(n)
+Fixed width binary string. Maximum 8,000 bytes
varbinary
+Variable width binary string. Maximum 8,000 bytes
varbinary(max)
+Variable width binary string. Maximum 2GB
image
+Variable width binary string. Maximum 2GB
tinyint
+Allows whole numbers from 0 to 255
+1 byte
smallint
+Allows whole numbers between -32,768 and 32,767
+2 bytes
int
+Allows whole numbers between -2,147,483,648 and 2,147,483,647
+4 bytes
bigint
+Allows whole numbers between -9,223,372,036,854,775,808 and 9,223,372,036,854,775,807
+8 bytes
decimal(p,s)
+Fixed precision and scale numbers.
+Allows numbers from -10^38 +1 to 10^38.
The p parameter indicates the maximum total number of digits that can be stored (both to the left and to the right of the decimal point). p must be a value from 1 to 38. Default is 18. The s parameter indicates the maximum number of digits stored to the right of the decimal point. s must be a value from 0 to p. Default value is 0. +5-17 bytes
+numeric(p,s)
+Fixed precision and scale numbers.
+Allows numbers from -10^38 +1 to 10^38.
The p parameter indicates the maximum total number of digits that can be stored (both to the left and to the right of the decimal point). p must be a value from 1 to 38. Default is 18. The s parameter indicates the maximum number of digits stored to the right of the decimal point. s must be a value from 0 to p. Default value is 0. +5-17 bytes
+smallmoney
+Monetary data from -214,748.3648 to 214,748.3647
+4 bytes
money
+Monetary data from -922,337,203,685,477.5808 to 922,337,203,685,477.5807
+8 bytes
float(n)
+Floating precision number data from -1.79E + 308 to 1.79E + 308.
+The n parameter indicates whether the field should hold 4 or 8 bytes. float(24) holds a 4-byte field and float(53) holds an 8-byte field. Default value of n is 53.
+4 or 8 bytes
real
+Floating precision number data from -3.40E + 38 to 3.40E + 38
+4 bytes
datetime
+From January 1, 1753 to December 31, 9999 with an accuracy of 3.33 milliseconds
+8 bytes
datetime2
+From January 1, 0001 to December 31, 9999 with an accuracy of 100 nanoseconds
+6-8 bytes
smalldatetime
+From January 1, 1900 to June 6, 2079 with an accuracy of 1 minute
+4 bytes
date
+Store a date only. From January 1, 0001 to December 31, 9999
+3 bytes
time
+Store a time only to an accuracy of 100 nanoseconds
+3-5 bytes
datetimeoffset
+The same as datetime2 with the addition of a time zone offset
+8-10 bytes
timestamp
+Stores a unique number that gets updated every time a row gets created or modified. The timestamp value is based upon an internal clock and does not correspond to real time. Each table may have only one timestamp variable
sql_variant
+Stores up to 8,000 bytes of data of various data types, except text, ntext, and timestamp
uniqueidentifier
+Stores a globally unique identifier (GUID)
xml
+Stores XML formatted data. Maximum 2GB
cursor
+Stores a reference to a cursor used for database operations
table
+Stores a result-set for later processing
SQL aggregate functions return a single value, calculated from values in a column.
+Useful aggregate functions:
+AVG()
- Returns the average valueCOUNT()
- Returns the number of rowsTOP 1
- Single sampleMAX()
- Returns the largest valueMIN()
- Returns the smallest valueSUM()
- Returns the sumExamples:
+SELECT COUNT(DISTINCT column_name) FROM table_name;
+
+SELECT TOP 1 column_name FROM table_name
+ORDER BY column_name DESC;
+
+SELECT column_name, aggregate_function(column_name)
+FROM table_name
+WHERE column_name operator value
+GROUP BY column_name;
+
+SELECT Shippers.ShipperName, COUNT(Orders.OrderID) AS NumberOfOrders
+FROM Orders
+LEFT JOIN Shippers
+ON Orders.ShipperID=Shippers.ShipperID
+GROUP BY ShipperName;
+
+SELECT column_name, aggregate_function(column_name)
+FROM table_name
+WHERE column_name operator value
+GROUP BY column_name
+HAVING aggregate_function(column_name) operator value;
+
+SELECT Employees.LastName, COUNT(Orders.OrderID) AS NumberOfOrders
+FROM Orders
+INNER JOIN Employees
+ON Orders.EmployeeID=Employees.EmployeeID
+GROUP BY LastName
+HAVING COUNT(Orders.OrderID) > 10;
+
CREATE FUNCTION FunctionName
+(
+-- Add the parameters for the function here
+@p1 int
+)
+RETURNS int
+AS
+BEGIN
+-- Declare the return variable here
+DECLARE @Result int
+-- Add the T-SQL statements to compute the return value here
+SELECT @Result = @p1
+
+-- Return the result of the function
+RETURN @Result
+END
+
IF OBJECT_ID (N'dbo.EmployeeByID' ) IS NOT NULL
+ DROP FUNCTION dbo.EmployeeByID
+GO
+
+CREATE FUNCTION dbo.EmployeeByID(@InEmpID int)
+RETURNS @retFindReports TABLE
+(
+ -- columns returned by the function
+ EmployeeID int NOT NULL,
+ Name nvarchar(255 ) NOT NULL,
+ Title nvarchar(50 ) NOT NULL,
+ EmployeeLevel int NOT NULL
+)
+AS
+-- body of the function
+BEGIN
+ WITH DirectReports(Name , Title , EmployeeID , EmployeeLevel , Sort ) AS
+ (SELECT CONVERT( varchar(255 ), c .FirstName + ' ' + c.LastName ),
+ e.Title ,
+ e.EmployeeID ,
+ 1 ,
+ CONVERT(varchar (255), c. FirstName + ' ' + c .LastName)
+ FROM HumanResources.Employee AS e
+ JOIN Person.Contact AS c ON e.ContactID = c.ContactID
+ WHERE e.EmployeeID = @InEmpID
+ UNION ALL
+ SELECT CONVERT (varchar( 255), REPLICATE ( '| ' , EmployeeLevel) +
+ c.FirstName + ' ' + c. LastName),
+ e.Title ,
+ e.EmployeeID ,
+ EmployeeLevel + 1,
+ CONVERT ( varchar(255 ), RTRIM (Sort) + '| ' + FirstName + ' ' +
+ LastName)
+ FROM HumanResources.Employee as e
+ JOIN Person.Contact AS c ON e.ContactID = c.ContactID
+ JOIN DirectReports AS d ON e. ManagerID = d. EmployeeID
+ )
+ -- copy the required columns to the result of the function
+
+ INSERT @retFindReports
+ SELECT EmployeeID, Name, Title, EmployeeLevel
+ FROM DirectReports
+ ORDER BY Sort
+ RETURN
+END
+GO
+
CREATE PROCEDURE ProcedureName
+ -- Add the parameters for the stored procedure here
+ @p1 int = 0 ,
+ @p2 int = 0
+AS
+BEGIN
+ -- SET NOCOUNT ON added to prevent extra result sets from
+ -- interfering with SELECT statements.
+ SET NOCOUNT ON;
+
+ -- Insert statements for procedure here
+ SELECT @p1 , @p2
+END
+GO
+
Q. Here's the data in a table 'orders'
+customer_id order_id order_day
+123 27424624 25Dec2011
+123 89690900 25Dec2010
+797 12131323 25Dec2010
+876 67145419 15Dec2011
+
Could you give me SQL for customers who placed orders on both the days, 25th Dec 2010 and 25th Dec 2011?
+ + + + + + + + + + + + + + + + + + + +Why we use Terraform and not Chef, Puppet, Ansible, SaltStack, or CloudFormation
+YAML notation for folded text: >
data: >
+ Wrapped text
+ will be folded
+ into a single
+ paragraph
+
+ Blank lines denote
+ paragraph breaks
+
Templates for the US East (Northern Virginia) Region
+ + + +Free Templates for AWS CloudFormation (Cloudonaut)
+Deploying Microservices with Amazon ECS, AWS CloudFormation, and an Application Load Balancer
+---
+AWSTemplateFormatVersion: "version date"
+
+Description:
+ String
+
+Metadata:
+ template metadata
+
+Parameters:
+ set of parameters
+
+Mappings:
+ set of mappings
+
+Conditions:
+ set of conditions
+
+Transform:
+ set of transforms
+
+Resources:
+ set of resources
+
+Outputs:
+ set of outputs
+
With examples:
+---
+AWSTemplateFormatVersion: "2010-09-09"
+
+Description: >
+ Here are some
+ details about
+ the template.
+
+Metadata:
+ Instances:
+ Description: "Information about the instances"
+ Databases:
+ Description: "Information about the databases"
+
+Parameters:
+ InstanceTypeParameter:
+ Type: String # String, Number, List<Number>, CommaDelimitedList e.g. "test,dev,prod", or an AWS-specific types such as Amazon EC2 key pair names and VPC IDs.
+ Default: t2.micro
+ AllowedValues:
+ - t2.micro
+ - m1.small
+ Description: Enter t2.micro or m1.small. Default is t2.micro.
+ # AllowedPattern: "[A-Za-z0-9]+" # A regular expression that represents the patterns you want to allow for String types.
+ # ConstraintDescription: Malformed input-Parameter MyParameter must match pattern [A-Za-z0-9]+
+ # MinLength: 2 # for String
+ # MaxLength: 10
+ # MinValue: 0 # for Number types.
+ # MaxValue: 100
+ # NoEcho: True
+
+Mappings:
+ RegionMap:
+ us-east-1:
+ "32": "ami-6411e20d"
+ us-west-1:
+ "32": "ami-c9c7978c"
+ eu-west-1:
+ "32": "ami-37c2f643"
+ ap-southeast-1:
+ "32": "ami-66f28c34"
+ ap-northeast-1:
+ "32": "ami-9c03a89d"
+
+Conditions:
+ CreateProdResources: !Equals [ !Ref EnvType, prod ]
+
+Transform:
+ set of transforms
+
+Resources:
+ Ec2Instance:
+ Type: AWS::EC2::Instance
+ Properties:
+ InstanceType:
+ Ref: InstanceTypeParameter # reference to parameter above
+ ImageId: ami-2f726546
+
+Outputs:
+ VolumeId:
+ Condition: CreateProdResources
+ Value:
+ !Ref NewVolume
+
Resources:
+ Ec2Instance:
+ Type: AWS::EC2::Instance
+ Properties:
+ SecurityGroups:
+ - Ref: InstanceSecurityGroup
+ KeyName: mykey
+ ImageId: ''
+ InstanceSecurityGroup:
+ Type: AWS::EC2::SecurityGroup
+ Properties:
+ GroupDescription: Enable SSH access via port 22
+ SecurityGroupIngress:
+ - IpProtocol: tcp
+ FromPort: '22'
+ ToPort: '22'
+ CidrIp: 0.0.0.0/0
+
Repo hosting:
+ +<repo>
into the folder called <directory>
on the local machine:git config --global user.name "Firstname Lastname"
+git config --global user.email "your_email@youremail.com"
+
<file>
for the next commit:<directory>
for the next commit:git diff # git diff by itself doesn’t show all changes made since your last commit – only changes that are still unstaged.
+git diff --staged # Shows file differences between staging and the last file version
+
git checkout <commit> # Return to commit
+git checkout master # Return to the master branch (or whatever branch we choose)
+
git checkout <commit> <file> # Check out the version of the file from the selected commit
+git checkout HEAD hello.py # Check out the most recent version
+
Branches are just pointers to commits.
+<branch>
.This does not check out the new branch. You need:
+ +Or direcly create-and-check out <new-branch>
.
Generate a new commit that undoes all of the changes introduced in <commit>
, then apply it to the current branch
git revert
undoes a single commit — it does not “revert” back to the previous state of a project by removing all subsequent commits.
git fetch <remote>
followed by git merge origin/<current-branch>
.git pull
or git push
.-A
, --all
finds new files as well as staging modified content and removing files that are no longer in the working tree.
git add -A
+git commit -m "Add repo instructions"
+git push -u origin master
+git pull
+ssh -p 2222 user@domain.com
+
new-feature
branchIf local history has diverged from the central repository, Git will refuse the request.
+ +The --bare
flag creates a repository that doesn’t have a working directory, making it impossible to edit files and commit changes in that repository. Central repositories should always be created as bare repositories because pushing branches to a non-bare repository has the potential to overwrite changes.
Tools to build complex pipelines of batch jobs. They handle dependency resolution, workflow management, visualization.
+Petabyte-Scale Data Pipelines with Docker, Luigi and Elastic Spot Instances
+ + + + + + + + + + + + + + + + + + + +In Windows 10 WSL, install sdkman
sudo apt install zip
+curl -s "https://get.sdkman.io" | bash
+source "$HOME/.sdkman/bin/sdkman-init.sh"
+sdk version
+
Install gradle
+ +Create a gradle project (for Java)
+ +You can now use ./gradlew
or gradlew.bat
in the project folder
./gradlew tasks
in your project directory lists which tasks you can run in your project, such as building or running your code../gradlew projects
./gradlew properties
Most commonly used Java tasks:
+./gradlew build
will compile your project's code into a /build folder../gradlew run
will run the compiled code in your build folder../gradlew clean
will purge that build folder../gradlew test
will execute unit tests without building or running your code again.gradle.properties
file and configures Gradle accordinglysettings.gradle
file against the Settings objectbuild.gradle
file against its projectIn case of a multi-project build, we'd probably have multiple different build.gradle
files, one for each project.
+The build.gradle
file is executed against a Project instance, with one Project instance created per subproject.
Every Gradle build is made up of one or more projects. What a project represents depends on what it is that you are doing with Gradle. For example, a project might represent a library JAR or a web application.
+Each project is made up of one or more tasks. A task represents some atomic piece of work which a build performs. This might be compiling some classes, creating a JAR, generating Javadoc, or publishing some archives to a repository.
+Tasks are snippets that we can run directly from the command line in our project directory via ./gradlew [TASK_NAME]
task copy(type: Copy, group: "Custom", description: "Copies sources to the dest directory") {
+ from "src"
+ into "dest"
+}
+
+// accessing task properties
+println copy.destinationDir
+println project.copy.destinationDir
+
task('copy2', type: Copy) {
+ description 'Copies the resource directory to the target directory.'
+ from(file('src'))
+ into(buildDir)
+ include('**/*.txt', '**/*.xml', '**/*.properties')
+ timeout = Duration.ofMillis(50000)
+}
+
task hello {
+ group = 'Worthless tasks'
+ description = 'An utterly useless task'
+ // extra (custom) properties
+ ext.myProperty = "myValue"
+ doLast {
+ println 'Hello world!'
+ }
+}
+
+// API call
+hello.doLast {
+ println "Greetings from the $hello.name task." // accessing task property in interpolated string
+}
+
+hello.configure {
+ doLast {
+ println 'Hello again'
+ }
+}
+
+task next {
+ dependsOn hello // or 'hello' if lazy initialized - task dependency
+ doLast {
+ println hello.myProperty
+ }
+}
+
// dynamic tasks
+4.times { counter ->
+ task "task$counter" {
+ doLast {
+ println "I'm task number $counter"
+ }
+ }
+}
+
+// default tasks
+defaultTasks 'task0', 'task1'
+
import org.apache.commons.codec.binary.Base64
+
+// dependencies for the build script
+buildscript {
+ repositories {
+ mavenCentral()
+ }
+ dependencies {
+ classpath group: 'commons-codec', name: 'commons-codec', version: '1.2'
+ }
+}
+
+task encode {
+ doLast {
+ // using the build script dependencies
+    byte[] encodedString = new Base64().encode('hello world\n'.getBytes())
+ println new String(encodedString)
+ }
+}
+
+// you can also declare methods
+File[] fileList(String dir) {
+ file(dir).listFiles({file -> file.isFile() } as FileFilter).sort()
+}
+
Essential plugins for Java:
+ +repositories {
+ jcenter() // or mavenCentral()
+}
+
+dependencies {
+ implementation 'com.google.guava:guava:26.0-jre' // implementation is a configuration defined by the java plugin
+ compile group: 'mysql', name: 'mysql-connector-java', version: '5.1.13'
+
+ testImplementation 'junit:junit:4.12'
+}
+
+// pointer to the Java entrypoint
+mainClassName="com.someorg.someprj.App"
+
There are seven levels of logging defined within the API: OFF, DEBUG, INFO, ERROR, WARN, FATAL, and ALL.
+$ gunzip apache-log4j-1.2.15.tar.gz
+$ tar -xvf apache-log4j-1.2.15.tar
+$ pwd
+/usr/local/apache-log4j-1.2.15
+$ export CLASSPATH=$CLASSPATH:/usr/local/apache-log4j-1.2.15/log4j-1.2.15.jar
+$ export PATH=$PATH:/usr/local/apache-log4j-1.2.15/
+
<dependencies>
+<dependency>
+<groupId>org.apache.logging.log4j</groupId>
+<artifactId>log4j-api</artifactId>
+<version>2.6.1</version>
+</dependency>
+<dependency>
+<groupId>org.apache.logging.log4j</groupId>
+<artifactId>log4j-core</artifactId>
+<version>2.6.1</version>
+</dependency>
+</dependencies>
+
All the libraries should be available in CLASSPATH and yourlog4j.properties file should be available in PATH.
+# Define the root logger with appender file
+log = /usr/home/log4j
+log4j.rootLogger = WARN, FILE
+
+# Define the file appender
+log4j.appender.FILE=org.apache.log4j.FileAppender
+log4j.appender.FILE.File=${log}/log.out
+
+# Define the layout for file appender
+log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
+log4j.appender.FILE.layout.conversionPattern=%m%n
+
import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+public class MyTest {
+
+    private static final Logger logger = LogManager.getLogger(); // equiv to LogManager.getLogger(MyTest.class);
+    // or, with an explicit name: LogManager.getLogger("HelloWorld");
+
+ public static void main(String[] args) {
+        // note: in the log4j2 API, log levels are set in the configuration, not via logger.setLevel()
+ logger.info("Hello, World!");
+ // string interpolation
+ logger.debug("Logging in user {} with birthday {}", user.getName(), user.getBirthdayCalendar());
+
+ // pre-Java 8 style optimization: explicitly check the log level
+ // to make sure the expensiveOperation() method is only called if necessary
+ if (logger.isTraceEnabled()) {
+ logger.trace("Some long-running operation returned {}", expensiveOperation());
+ }
+
+ // Java-8 style optimization: no need to explicitly check the log level:
+        // the lambda expression is not evaluated if the TRACE level is not enabled
+        logger.trace("Some long-running operation returned {}", () -> expensiveOperation());
+ }
+}
+
+// FORMATTER LOGGER
+public static Logger logger = LogManager.getFormatterLogger("Foo");
+
+logger.debug("Logging in user %s with birthday %s", user.getName(), user.getBirthdayCalendar());
+logger.debug("Logging in user %1$s with birthday %2$tm %2$te,%2$tY", user.getName(), user.getBirthdayCalendar());
+//
+logger.debug("Logging in user {} with birthday {}", user.getName(), user.getBirthdayCalendar());
+
It contains user-specific configuration for authentication, repositories, and other information to customize the behavior of Maven.
+This directory contains your local Maven repository. When you download a dependency from a remote Maven repository, Maven stores a copy of the dependency in your local repository.
+Introduction to the standard directory layout
+Without customization, source code is assumed to be in ${basedir}/src/main/java and resources in ${basedir}/src/main/resources.
+Tests are assumed to be in ${basedir}/src/test, and a project is assumed to produce a JAR file.
+Maven assumes that you want to compile bytecode to ${basedir}/target/classes and then create a distributable JAR file in ${basedir}/target.
+For WAR files, the /WEB-INF directory contains a file named web.xml which defines the structure of the web application.
+See also Tomcat Deployment guide
mvn archetype:generate -DgroupId=com.dw -DartifactId=es-demo -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
+
mvn archetype:create -DgroupId=org.yourcompany.project -DartifactId=application -DarchetypeArtifactId=maven-archetype-webapp # archetype:create is deprecated in favor of archetype:generate
+
mvn deploy:deploy-file -Dfile=/path/to/jar/file -DrepositoryId=repos-server -Durl=http://repos.company.o
+
You can run mvn site and then find an index.html file in target/site that contains links to JavaDoc and a few reports about your source code.
+Use the search engine at repository.sonatype.org to find dependencies by name and get the xml necessary to paste into your pom.xml.
All Spring beans are managed - they "live" inside a container, called "application context".
+Second, each application has an entry point into that context. Web applications have a servlet, JSF uses an EL resolver, and so on. There is also a place where the application context is bootstrapped and all beans autowired; in web applications this can be a startup listener.
+Autowiring happens by placing an instance of one bean into the desired field in an instance of another bean. Both classes should be beans, i.e. they should be defined to live in the application context.
+What does "living" in the application context mean? It means that the context instantiates the objects, not you: you never call new UserServiceImpl() - the container finds each injection point and sets an instance there.
+Don't forget chmod +x filename
+sudo yum update -y # all packages
+sudo yum install -y package_name
+sudo yum install -y httpd24 php56 mysql55-server php56-mysqlnd
+
# check a service is configured for startup
+sudo chkconfig sshd
+echo $? # 0 = configured for startup
+# or
+sudo chkconfig --list mysqld
+sudo chkconfig --list # all services
+
+# add a service
+sudo chkconfig --add vsftpd
+sudo chkconfig mysqld on
+sudo chkconfig --level 3 httpd on # specific runlevel
+
You can also use a /etc/rc.d/rc.local
script.
Example script for the EC2 User data field:
#!/bin/bash
+yum update -y
+yum install -y httpd24 php56 mysql55-server php56-mysqlnd
+service httpd start
+chkconfig httpd on
+groupadd www
+usermod -a -G www ec2-user
+chown -R root:www /var/www
+chmod 2775 /var/www
+find /var/www -type d -exec chmod 2775 {} +
+find /var/www -type f -exec chmod 0664 {} +
+echo "<?php phpinfo(); ?>" > /var/www/html/phpinfo.php
+
cloud-init
File location: /etc/sysconfig/cloudinit
Cloud-init output log file: /var/log/cloud-init-output.log
#!/bin/bash
+cd /tmp
+curl https://amazon-ssm-region.s3.amazonaws.com/latest/linux_amd64/amazon-ssm-agent.rpm -o amazon-ssm-agent.rpm
+yum install -y amazon-ssm-agent.rpm
+
How can I connect to an Amazon EC2 Linux instance with desktop functionality from Windows?
Check out the Jekyll docs for more info on how to get the most out of Jekyll. File all bugs/feature requests at Jekyll’s GitHub repo. If you have questions, you can ask them on Jekyll Talk.
Install Ruby via RubyInstaller
+Update RubyGems
+Bundler is a gem that manages other Ruby gems. It makes sure your gems and gem versions are compatible, and that you have all necessary dependencies each gem requires.
+# Create a new Jekyll site at ./myblog
+~ $ jekyll new myblog
+
+# Change into your new directory
+~ $ cd myblog
+
Jekyll installs a site that uses a gem-based theme called Minima.
+With gem-based themes, some of the site’s directories (such as the assets, _layouts, _includes, and _sass directories) are stored in the theme’s gem, hidden from your immediate view. Yet all of the necessary directories will be read and processed during Jekyll’s build process.
+Now browse to localhost:4000
+ +When you run bundle exec jekyll serve, Bundler uses the gems and versions as specified in Gemfile.lock to ensure your Jekyll site builds with no compatibility or dependency conflicts.
+The Gemfile and Gemfile.lock files inform Bundler about the gem requirements in your site. If your site doesn’t have these Gemfiles, you can omit bundle exec and just run jekyll serve.
+$ jekyll build
+# => The current folder will be generated into ./_site
+
+$ jekyll serve
+# => A development server will run at http://localhost:4000/
+# Auto-regeneration: enabled. Use `--no-watch` to disable.
+
Add to _config.yml
gems:
+ - jekyll-paginate
+ - jekyll-feed
+ - jekyll-sitemap
+
Custom Search
+Adding a custom Google search: https://digitaldrummerj.me/blogging-on-github-part-7-adding-a-custom-google-search/
+
Themes
+Theme documentation: https://jekyllrb.com/docs/themes/
+To change theme, search for jekyll theme on RubyGems (https://rubygems.org/search?utf8=%E2%9C%93&query=jekyll-theme) to find other gem-based themes.
+Add the theme to your site’s Gemfile:
+gem "jekyll-theme-tactile"
+
Add the following to your site’s _config.yml to activate the theme:
+ +Build your site:
+ +You can find out info about customizing your Jekyll theme, as well as basic Jekyll usage documentation at jekyllrb.com
+You can find the source code for the Jekyll minima theme at: +minima
+You’ll find this post in your _posts
directory. Go ahead and edit it and re-build the site to see your changes. You can rebuild the site in many different ways, but the most common way is to run jekyll serve
, which launches a web server and auto-regenerates your site when a file is updated.
To add new posts, simply add a file in the _posts
directory that follows the convention YYYY-MM-DD-name-of-post.ext
and includes the necessary front matter. Take a look at the source for this post to get an idea about how it works.
Jekyll also offers powerful support for code snippets:
GitHub Flavored Markdown Guide
+A paragraph is one or more consecutive lines of text separated by one or more blank lines. A blank line contains nothing but spaces or tabs.
+Do not indent normal paragraphs with spaces or tabs. Indent at least 4 spaces or a tab for code blocks.
+Syntax highlighted code block
+
+# Header 1
+## Header 2
+### Header 3
+
+- Bulleted
+- List
+
+1. Numbered
+2. List
+
+**Bold** and _Italic_ and `Code` text
+
+[Link](url) and ![Image](src)
+
Emphasis can be used in the mi*dd*le of a word.
[Text for the link](URL)
+
+ This is [an example][id] reference-style link.
+ [id]: https://example.com/ "Optional Title Here"
+
+ ![Alt text](/path/to/img.jpg "Optional title")
+
span of code
```python
+
+ def wiki_rocks(text):
+ formatter = lambda t: "funky"+t
+ return formatter(text)
+```
+
will be displayed as
+ +GitHub Pages site will use the layout and styles from the Jekyll theme you have selected in your repository settings. The name of this theme is saved in the Jekyll _config.yml
configuration file.
Bitbucket doesn't support arbitrary HTML in Markdown; it instead uses safe mode. Safe mode requires that you replace, remove, or escape HTML tags appropriately.
+Code highlighting to bitbucket README.md written in Python Markdown
:::python
+ friends = ['john', 'pat', 'gary', 'michael']
+ for i, name in enumerate(friends):
+     print "iteration {iteration} is {name}".format(iteration=i, name=name)
+ +This website is generated by mkdocs.org and the Material Theme.
+mkdocs new [dir-name]
- Create a new project.mkdocs serve
- Start the live-reloading docs server.mkdocs build
- Build the documentation site.mkdocs help
- Print this help message.To install MkDocs / create a new documentation site:
+ +To build the documentation site:
+ +To start the live-reloading docs server - https://localhost:8000/
+ +MkDocs can use the ghp-import tool to commit to the gh-pages branch and push the gh-pages branch to GitHub Pages:
+ +reStructuredText Cheat Sheet (see below)
+All reST files use an indentation of 3 spaces; no tabs are allowed. +The maximum line length is 80 characters for normal text, but tables, +deeply indented code samples and long links may extend beyond that. +Code example bodies should use normal Python 4-space indentation. +Paragraphs are simply chunks of text separated by one or more blank lines. +As in Python, indentation is significant in reST, so all lines of the same +paragraph must be left-aligned to the same level of indentation.
+Section headers are created by underlining (and optionally overlining) +the section title with a punctuation character, at least as long as the text:
+ =================
+ This is a heading
+ =================
+ # with overline, for parts
+ * with overline, for chapters
+ = for sections
+ - for subsections
+ ^ for subsubsections
+ " for paragraphs
+
+ one asterisk: *text* for emphasis (italics),
+ two asterisks: **text** for strong emphasis (boldface), and
+ backquotes: ``text`` for code samples.
+ escape with a backslash \
+
+ * This is a bulleted list.
+ * It has two items, the second
+ item uses two lines.
+
+ 1. This is a numbered list.
+ 2. It has two items too.
+
+ #. This is an auto-numbered list.
+ #. It has two items too.
+
Nested lists are possible, but be aware that they must be separated from the +parent list items by blank lines
+This is a normal text paragraph. The next paragraph is a code sample::
+
+ It is not processed in any way, except
+ that the indentation is removed.
+
+ It can span multiple lines.
+
+This is a normal text paragraph again.
+
`Link text <https://target>`_ for inline web links.
term (up to a line of text)
+ Definition of the term, which must be indented and
+ can even consist of multiple paragraphs
+
+ next term
+ Description.
+
See https://infinitemonkeycorps.net/docs/pph/
+ # Typical function documentation:
+
+ :param volume_id: The ID of the EBS volume to be attached.
+ :type volume_id: str
+
+ :param instance_id: The ID of the EC2 instance
+ :type instance_id: str
+
+ :return: `Reverse geocoder return value`_ dictionary giving closest
+ address(es) to `(lat, lng)`
+ :rtype: dict
+ :raises GoogleMapsError: If the coordinates could not be reverse geocoded.
+
+ Keyword arguments and return value are identical to those of :meth:`geocode()`.
+
+ .. _`Reverse geocoder return value`:
+ https://code.google.com/apis/maps/documentation/geocoding/index.html#ReverseGeocoding
+
:param lat: some text -- documents parameters
+:type lat: float -- documents parameter types
+:return: dictionary giving some info... -- documents return values
+:rtype: dict -- documents return type
+:raises SomeError: sometext... -- documents exceptions raised
+>>> starts a doctest and is automatically formatted as code
+:: and a blank line introduce a literal block
+Inline code references: mymethod, myfunc, :class:`myclass`, and :mod:`mymodule`
+Named links: Google_ together with the target definition .. _Google: https://www.google.com/
An explicit markup block begins with a line starting with .. followed by whitespace and is terminated by the next paragraph at the same level of indentation. (There needs to be a blank line between explicit markup and normal paragraphs.)
+ .. sectionauthor:: Guido van Rossum <guido@python.org>
+
+ .. rubric:: Footnotes
+
+ .. [#] Text of the first footnote.
+ .. [#] Text of the second footnote.
+
+
+ :mod:`parrot` -- Dead parrot access
+ ===================================
+
+ .. module:: parrot
+ :platform: Unix, Windows
+ :synopsis: Analyze and reanimate dead parrots.
+ .. moduleauthor:: Eric Cleese <eric@python.invalid>
+ .. moduleauthor:: John Idle <john@python.invalid>
+
+ .. function:: repeat([repeat=3[, number=1000000]])
+ repeat(y, z)
+ :bar: no
+
+ Return a line of text input from the user.
+
+
+ .. class:: Spam
+
+ Description of the class.
+
+ .. data:: ham
+
+ Description of the attribute.
+
:rolename:`content` or :role:`title <target>`
+
+ :meth:`~Queue.Queue.get` will refer to Queue.Queue.get but only display get as the link text.
+
The following roles refer to objects in modules and are possibly hyperlinked +if a matching identifier is found:
+mod
+The name of a module; a dotted name may be used. This should also be used for package names.
+func
+The name of a Python function; dotted names may be used. The role text should not include trailing parentheses to enhance readability. The parentheses are stripped when searching for identifiers.
+data
+The name of a module-level variable or constant.
+const
+The name of a “defined” constant. This may be a C-language #define or a Python variable that is not intended to be changed.
+class
+A class name; a dotted name may be used.
+meth
+The name of a method of an object. The role text should include the type name and the method name. A dotted name may be used.
+attr
+The name of a data attribute of an object.
+exc
+The name of an exception. A dotted name may be used.
+ =====================================================
+ The reStructuredText_ Cheat Sheet: Syntax Reminders
+ =====================================================
+ :Info: See <https://docutils.sf.net/rst.html> for introductory docs.
+ :Author: David Goodger <goodger@python.org>
+ :Date: $Date: 2013-02-20 01:10:53 +0000 (Wed, 20 Feb 2013) $
+ :Revision: $Revision: 7612 $
+ :Description: This is a "docinfo block", or bibliographic field list
+
+ .. NOTE:: If you are reading this as HTML, please read
+ `<cheatsheet.txt>`_ instead to see the input syntax examples!
+
+ Section Structure
+ =================
+ Section titles are underlined or overlined & underlined.
+
+ Body Elements
+ =============
+ Grid table:
+
+ +--------------------------------+-----------------------------------+
+ | Paragraphs are flush-left, | Literal block, preceded by "::":: |
+ | separated by blank lines. | |
+ | | Indented |
+ | Block quotes are indented. | |
+ +--------------------------------+ or:: |
+ | >>> print 'Doctest block' | |
+ | Doctest block | > Quoted |
+ +--------------------------------+-----------------------------------+
+ | | Line blocks preserve line breaks & indents. [new in 0.3.6] |
+ | | Useful for addresses, verse, and adornment-free lists; long |
+ | lines can be wrapped with continuation lines. |
+ +--------------------------------------------------------------------+
+
+ Simple tables:
+
+ ================ ============================================================
+ List Type Examples (syntax in the `text source <cheatsheet.txt>`_)
+ ================ ============================================================
+ Bullet list * items begin with "-", "+", or "*"
+ Enumerated list 1. items use any variation of "1.", "A)", and "(i)"
+ #. also auto-enumerated
+ Definition list Term is flush-left : optional classifier
+ Definition is indented, no blank line between
+ Field list :field name: field body
+ Option list -o at least 2 spaces between option & description
+ ================ ============================================================
+
+ ================ ============================================================
+ Explicit Markup Examples (visible in the `text source`_)
+ ================ ============================================================
+ Footnote .. [1] Manually numbered or [#] auto-numbered
+ (even [#labelled]) or [*] auto-symbol
+ Citation .. [CIT2002] A citation.
+ Hyperlink Target .. _reStructuredText: https://docutils.sf.net/rst.html
+ .. _indirect target: reStructuredText_
+ .. _internal target:
+ Anonymous Target __ https://docutils.sf.net/docs/ref/rst/restructuredtext.html
+ Directive ("::") .. image:: images/biohazard.png
+ Substitution Def .. |substitution| replace:: like an inline directive
+ Comment .. is anything else
+ Empty Comment (".." on a line by itself, with blank lines before & after,
+ used to separate indentation contexts)
+ ================ ============================================================
+
+ Inline Markup
+ =============
+ *emphasis*; **strong emphasis**; `interpreted text`; `interpreted text
+ with role`:emphasis:; ``inline literal text``; standalone hyperlink,
+ https://docutils.sourceforge.net; named reference, reStructuredText_;
+ `anonymous reference`__; footnote reference, [1]_; citation reference,
+ [CIT2002]_; |substitution|; _`inline internal target`.
+
+ Directive Quick Reference
+ =========================
+ See <https://docutils.sf.net/docs/ref/rst/directives.html> for full info.
+
+ ================ ============================================================
+ Directive Name Description (Docutils version added to, in [brackets])
+ ================ ============================================================
+ attention Specific admonition; also "caution", "danger",
+ "error", "hint", "important", "note", "tip", "warning"
+ admonition Generic titled admonition: ``.. admonition:: By The Way``
+ image ``.. image:: picture.png``; many options possible
+ figure Like "image", but with optional caption and legend
+ topic ``.. topic:: Title``; like a mini section
+ sidebar ``.. sidebar:: Title``; like a mini parallel document
+ parsed-literal A literal block with parsed inline markup
+ rubric ``.. rubric:: Informal Heading``
+ epigraph Block quote with class="epigraph"
+ highlights Block quote with class="highlights"
+ pull-quote Block quote with class="pull-quote"
+ compound Compound paragraphs [0.3.6]
+ container Generic block-level container element [0.3.10]
+ table Create a titled table [0.3.1]
+ list-table Create a table from a uniform two-level bullet list [0.3.8]
+ csv-table Create a table from CSV data [0.3.4]
+ contents Generate a table of contents
+ sectnum Automatically number sections, subsections, etc.
+ header, footer Create document decorations [0.3.8]
+ target-notes Create an explicit footnote for each external target
+ math Mathematical notation (input in LaTeX format)
+ meta HTML-specific metadata
+ include Read an external reST file as if it were inline
+ raw Non-reST data passed untouched to the Writer
+ replace Replacement text for substitution definitions
+ unicode Unicode character code conversion for substitution defs
+ date Generates today's date; for substitution defs
+ class Set a "class" attribute on the next element
+ role Create a custom interpreted text role [0.3.2]
+ default-role Set the default interpreted text role [0.3.10]
+ title Set the metadata document title [0.3.10]
+ ================ ============================================================
+
+ Interpreted Text Role Quick Reference
+ =====================================
+ See <https://docutils.sf.net/docs/ref/rst/roles.html> for full info.
+
+ ================ ============================================================
+ Role Name Description
+ ================ ============================================================
+ emphasis Equivalent to *emphasis*
+ literal Equivalent to ``literal`` but processes backslash escapes
+ math Mathematical notation (input in LaTeX format)
+ PEP Reference to a numbered Python Enhancement Proposal
+ RFC Reference to a numbered Internet Request For Comments
+ raw For non-reST data; cannot be used directly (see docs) [0.3.6]
+ strong Equivalent to **strong**
+ sub Subscript
+ sup Superscript
+ title Title reference (book, etc.); standard default role
+ ================ ============================================================
+
Building beautiful REST APIs using Flask, Swagger UI and Flask-RESTPlus
? # overall help
+help # python help system
+?someobj or someobj? # help
+??someobj or someobj?? # detailed help
+
%pdoc, %pdef, %psource for docstring, function definition, source code only.
To run a program directly from the IPython console:
+ +%run
has special flags for timing the execution of your scripts (-t
) or for running them under the control of either Python's pdb debugger (-d
) or profiler (-p
):
%edit %ed # edit then execute
+%save
+%load example.py # load local (example) file (or url) allowing modification
+%load https://matplotlib.org/plot_directive/mpl_examples/mplot3d/contour3d_demo.py
+%macro # define macro with range of history lines, filenames or string objects
+%recall
+
+%whos # list identifiers you have defined interactively
+%reset -f -s # remove objects -f for force -s for soft (leaves history).
+
%reset
is not a kernel restartCtrl+.
in "qtconsole"import module ; reload(module)
to reload a module from disk%debug # jump into the Python debugger (pdb)
+%pdb # start the debugger on any uncaught exception.
+
+%cd # change directory
+%pwd # print working directory
+%env # OS environment variables
+
_ __ ___ # etc... for previous outputs.
+_i _ii _i4 # etc.. for previous input. _ih for list of previous inputs
+
Start with ipython --gui=qt
or at the IPython prompt:
Arguments can be wx, qt, gtk and tk.
Start with: ipython --matplotlib (or --matplotlib=qt etc...)
At the IPython prompt:
+%matplotlib # set matplotlib to work interactively; does not import anything
+%matplotlib inline
+%matplotlib qt # request a specific GUI backend
+%pylab inline
+
%pylab
makes the following imports:
import numpy
+import matplotlib
+from matplotlib import pylab, mlab, pyplot
+np = numpy
+plt = pyplot
+from IPython.display import display
+from IPython.core.pylabtools import figsize, getfigs
+from pylab import *
+from numpy import *
+
At the command prompt:
alternative: --matplotlib inline; or within IPython:
+ +To embed plots, SVG or HTML in qtconsole, call display:
+from IPython.core.display import display, display_html
+from IPython.core.display import display_png, display_svg
+display(plt.gcf()) # embeds the current figure in the qtconsole
+display(*getfigs()) # embeds all active figures in the qtconsole
+#or:
+f = plt.figure()
+plt.plot(np.random.rand(100))
+display(f)
+
ipython and ipython notebook for matlab users
+Enter
to edit a cellShift + Enter
to evaluateCtrl + m
or Esc
for the "command mode"In command mode:
+ h list of keyboard shortcuts
+ 1-6 to convert to heading cell
+ m to convert to markdown cell
+ y to convert to code
+ c copy / v paste
+ d d delete cell
+ s save notebook
+ . to restart kernel
+
Papermill is a tool for parameterizing and executing Jupyter Notebooks.
Matplotlib prepares 2D (and some 3D) graphics.
+import numpy as np
+import matplotlib.pyplot as plt
+
+# Compute the x and y coordinates for points on sine and cosine curves
+x = np.arange(0, 3 * np.pi, 0.1)
+y_sin = np.sin(x)
+y_cos = np.cos(x)
+
+# Set up a subplot grid that has height 2 and width 1,
+# and set the first such subplot as active.
+plt.subplot(2, 1, 1)
+
+# Make the first plot
+plt.plot(x, y_sin)
+plt.title('Sine')
+
+# Set the second subplot as active, and make the second plot.
+plt.subplot(2, 1, 2)
+plt.plot(x, y_cos)
+plt.title('Cosine')
+
+# Show the figure.
+plt.show()
+
venv — Creation of virtual environments
+ + +virtualenv is a tool to create isolated Python environments. Since Python 3.3, a subset of it has been integrated into the standard library under the venv module. Note though, that the venv module does not offer all features of this library (e.g. cannot create bootstrap scripts, cannot create virtual environments for other python versions than the host python, not relocatable, etc.).
+ +A set of command line tools to help you keep your pip-based packages fresh, even when you've pinned them.
+ +The problems that Pipenv seeks to solve are multi-faceted:
+Pyenv for managing multiple Python versions
+Code coverage measurement for Python
+mechanize - Automate interaction with HTTP web servers
+Cookiecutter template for a Python package
+ + +Twine is a utility for publishing Python packages on PyPI.
+Buildout, an automation tool written in and extended with Python
+ +Uranium: a Python Build System
+Host, run, and code Python in the cloud
+ + + + + + + + + + + + + + + + + + +x = 0
+def outer():
+ x = 1
+ def inner():
+ nonlocal x
+ x = 2
+ print("inner:", x)
+
+ inner()
+ print("outer:", x)
+
+outer()
+print("global:", x)
+
+# inner: 2
+# outer: 2
+# global: 0
+
+## with global
+x = 0
+def outer():
+ x = 1
+ def inner():
+ global x
+ x = 2
+ print("inner:", x)
+ inner()
+ print("outer:", x)
+outer()
+print("global:", x)
+
+# inner: 2
+# outer: 1
+# global: 2
+
name="David"
+f"My name is {name}"
+value = decimal.Decimal("10.4507")
+print(f"result: {value:10.5}" ) # width precision
+
yield from iterator is (in the simple case) equivalent to: for item in iterator: yield item
+ +Example:
+def lazy_range(up_to):
+ """Generator to return the sequence of integers from 0 to up_to, exclusive."""
+ index = 0
+ def gratuitous_refactor():
+ nonlocal index
+ while index < up_to:
+ yield index
+ index += 1
+ yield from gratuitous_refactor()
+
New 3.6 syntax:
+import asyncio
+import datetime
+
+async def func(param1, param2):
+ do_stuff()
+ await some_coroutine()
+
+async def read_data(db):
+ data = await db.fetch('SELECT ...')
+
+async def display_date(loop):
+ end_time = loop.time() + 5.0
+ while True:
+ print(datetime.datetime.now())
+ if (loop.time() + 1.0) >= end_time:
+ break
+ await asyncio.sleep(1)
+
+
+loop = asyncio.get_event_loop()
+# Blocking call which returns when the display_date() coroutine is done
+loop.run_until_complete(display_date(loop))
+loop.close()
+
{i async for i in agen()}
[i async for i in agen()]
{i: i ** 2 async for i in agen()}
(i ** 2 async for i in agen())
Other common typings include Any, Generic, Dict, List, Optional, Mapping, Set, and Sequence (written with a type parameter, e.g. Sequence[int]).
from typing import Iterable, Tuple, TypeVar
+
+T = TypeVar('T', int, float, complex) # T is either an int, a float, or a complex
+Vector = Iterable[Tuple[T, T]]
+
+def inproduct(v: Vector[T]) -> T:
+ return sum(x*y for x, y in v)
+
+def dilate(v: Vector[T], scale: T) -> Vector[T]:
+ return ((x * scale, y * scale) for x, y in v)
+vec = [] # type: Vector[float]
+
Actors have: state, behavior, a mailbox, child actors, and a supervisor strategy.
+sbt:
+libraryDependencies ++= Seq(
+ "com.typesafe.akka" %% "akka-actor" % "2.5.6",
+ "com.typesafe.akka" %% "akka-testkit" % "2.5.6" % Test
+)
+
//#full-example
+package com.lightbend.akka.sample
+
+import akka.actor.{ Actor, ActorLogging, ActorRef, ActorSystem, Props }
+import scala.io.StdIn
+
+//#greeter-companion
+//#greeter-messages
+object Greeter {
+ //#greeter-messages
+ def props(message: String, printerActor: ActorRef): Props = Props(new Greeter(message, printerActor))
+ //#greeter-messages
+ final case class WhoToGreet(who: String)
+ case object Greet
+}
+//#greeter-messages
+//#greeter-companion
+
+//#greeter-actor
+class Greeter(message: String, printerActor: ActorRef) extends Actor {
+ import Greeter._
+ import Printer._
+
+ var greeting = ""
+
+ def receive = {
+ case WhoToGreet(who) =>
+ greeting = s"$message, $who"
+ case Greet =>
+ //#greeter-send-message
+ printerActor ! Greeting(greeting)
+ //#greeter-send-message
+ }
+}
+//#greeter-actor
+
+//#printer-companion
+//#printer-messages
+object Printer {
+ //#printer-messages
+ def props: Props = Props[Printer]
+ //#printer-messages
+ final case class Greeting(greeting: String)
+}
+//#printer-messages
+//#printer-companion
+
+//#printer-actor
+class Printer extends Actor with ActorLogging {
+ import Printer._
+
+ def receive = {
+ case Greeting(greeting) =>
+ log.info(s"Greeting received (from ${sender()}): $greeting")
+ }
+}
+//#printer-actor
+
+//#main-class
+object AkkaQuickstart extends App {
+ import Greeter._
+
+ // Create the 'helloAkka' actor system
+ val system: ActorSystem = ActorSystem("helloAkka")
+
+ try {
+ //#create-actors
+ // Create the printer actor
+ val printer: ActorRef = system.actorOf(Printer.props, "printerActor")
+
+ // Create the 'greeter' actors
+ val howdyGreeter: ActorRef =
+ system.actorOf(Greeter.props("Howdy", printer), "howdyGreeter")
+ val helloGreeter: ActorRef =
+ system.actorOf(Greeter.props("Hello", printer), "helloGreeter")
+ val goodDayGreeter: ActorRef =
+ system.actorOf(Greeter.props("Good day", printer), "goodDayGreeter")
+ //#create-actors
+
+ //#main-send-messages
+ howdyGreeter ! WhoToGreet("Akka")
+ howdyGreeter ! Greet
+
+ howdyGreeter ! WhoToGreet("Lightbend")
+ howdyGreeter ! Greet
+
+ helloGreeter ! WhoToGreet("Scala")
+ helloGreeter ! Greet
+
+ goodDayGreeter ! WhoToGreet("Play")
+ goodDayGreeter ! Greet
+ //#main-send-messages
+
+ println(">>> Press ENTER to exit <<<")
+ StdIn.readLine()
+ } finally {
+ system.terminate()
+ }
+}
+//#main-class
+//#full-example
+
The layout of a Play application is standardized to keep things as simple as possible. After a first successful compile, a Play application looks like this:
+app → Application sources
+ └ assets → Compiled asset sources
+ └ stylesheets → Typically LESS CSS sources
+ └ javascripts → Typically CoffeeScript sources
+ └ controllers → Application controllers
+ └ models → Application business layer
+ └ views → Templates
+build.sbt → Application build script
+conf → Configurations files and other non-compiled resources (on classpath)
+ └ application.conf → Main configuration file
+ └ routes → Routes definition
+dist → Arbitrary files to be included in your projects distribution
+public → Public assets
+ └ stylesheets → CSS files
+ └ javascripts → Javascript files
+ └ images → Image files
+project → sbt configuration files
+ └ build.properties → Marker for sbt project
+ └ plugins.sbt → sbt plugins including the declaration for Play itself
+lib → Unmanaged libraries dependencies
+logs → Logs folder
+ └ application.log → Default log file
+target → Generated stuff
+ └ resolution-cache → Info about dependencies
+ └ scala-2.11
+ └ api → Generated API docs
+ └ classes → Compiled class files
+ └ routes → Sources generated from routes
+ └ twirl → Sources generated from templates
+ └ universal → Application packaging
+ └ web → Compiled web assets
+test → source folder for unit or functional tests
+
Examples from Scala Koans.
+The scala package contains core types like Int, Float, Array or Option which are accessible in all Scala compilation units without explicit qualification or imports.
+Notable packages include:
+ scala.collection and its sub-packages contain Scala's collections framework
+ scala.collection.immutable - Immutable, sequential data-structures such as Vector, List, Range, HashMap or HashSet
+ scala.collection.mutable - Mutable, sequential data-structures such as ArrayBuffer, StringBuilder, HashMap or HashSet
+ scala.collection.concurrent - Mutable, concurrent data-structures such as TrieMap
+ scala.collection.parallel.immutable - Immutable, parallel data-structures such as ParVector, ParRange, ParHashMap or ParHashSet
+ scala.collection.parallel.mutable - Mutable, parallel data-structures such as ParArray, ParHashMap, ParTrieMap or ParHashSet
+
+ scala.concurrent - Primitives for concurrent programming such as Futures and Promises
+ scala.io - Input and output operations
+ scala.math - Basic math functions and additional numeric types like BigInt and BigDecimal
+ scala.sys - Interaction with other processes and the operating system
+ scala.util.matching - Regular expressions
+
Additional parts of the standard library are shipped as separate libraries. These include:
+ scala.reflect - Scala's reflection API (scala-reflect.jar)
+ scala.xml - XML parsing, manipulation, and serialization (scala-xml.jar)
+ scala.swing - A convenient wrapper around Java's GUI framework called Swing (scala-swing.jar)
+ scala.util.parsing - Parser combinators (scala-parser-combinators.jar)
+ Automatic imports
+
Identifiers in the scala package and the scala.Predef object are always in scope by default.
+Some of these identifiers are type aliases provided as shortcuts to commonly used classes. For example, List is an alias for scala.collection.immutable.List.
+Other aliases refer to classes provided by the underlying platform. For example, on the JVM, String is an alias for java.lang.String.
Traversable is the superclass of List, Array, Map, Set, Stream, and more. The methods below are shared by all of these types, and many of them return a collection of a different type than the one they were invoked on.
+val set = Set(1, 9, 10, 22)
+val list = List(3, 4, 5, 10)
+val result = set ++ list // ++ appends two Traversables together.
+result.size
+result.isEmpty
+result.hasDefiniteSize // false if a Stream
+
list.head
+list.headOption
+list.tail
+list.lastOption
+result.last
+list.init // collection without the last element
+list.slice(1, 3)
+list.take(3)
+list drop 6 take 3
+list.takeWhile(_ < 100)
+list.dropWhile(_ < 100)
+
list.filter(_ < 100)
+list.filterNot(_ < 100)
+list.find(_ % 2 != 0) // get first element that matches
+
+list.foreach(num => println(num * 4)) // side effect
+
+list.map(_ * 4) // map
+
+val list = List(List(1), List(2, 3, 4), List(5, 6, 7), List(8, 9, 10))
+list.flatten
+list.flatMap(_.map(_ * 4)) // map then flatten
+
+val result = list.collect { // apply a partial function to all elements of a Traversable and will return a different collection.
+ case x: Int if (x % 2 == 0) => x * 3
+ }
+// can use orElse or andThen
+
val array = Array(87, 44, 5, 4, 200, 10, 39, 100) // splitAt - will split a Traversable at a position, returning a tuple.
+val result = array splitAt 3
+result._1
+result._2
+
+val result = array partition (_ < 100) // partition splits a Traversable according to a predicate, returning a Tuple2: elements satisfying the predicate on the left, the rest on the right
+
+// groupBy returns a map e.g. Map( "Odd" -> ... , "Even" -> ...)
+val result = array groupBy { case x: Int if x % 2 == 0 => "Even"; case x: Int if x % 2 != 0 => "Odd" }
+
list forall (_ < 100) // true if predicate true for all elements
+list exists (_ < 100) // true if predicate true for any element
+list count (_ < 100)
+
list.foldLeft(0)(_ - _)
+(0 /: list)(_ - _) // Short hand
+
+list.foldRight(0)(_ - _)
+(list :\ 0)(_ - _) // Short hand
+
+list.reduceLeft { _ + _ }
+list.reduceRight { _ + _ }
+
+list.sum
+list.product
+list.max
+list.min
+
+val list = List(List(1, 2, 3), List(4, 5, 6), List(7, 8, 9))
+list.transpose
+
list.toArray
+list.toSet
+set.toList
+set.toIterable
+set.toSeq
+set.toIndexedSeq
+list.toStream
+
+val list = List("Phoenix" -> "Arizona", "Austin" -> "Texas") // elements should be tuples
+val result = list.toMap
+
result.mkString(",")
+list.mkString(">", ",", "<")
+val list = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15)
+val stringBuilder = new StringBuilder
+stringBuilder.append("I want all numbers 6-12: ")
+list.filter(it => it > 5 && it < 13).addString(stringBuilder, ",")
+stringBuilder.mkString
+
val a = List(1, 2, 3) // immutable
+val b = 1 :: 2 :: 3 :: Nil // cons notation
+(a == b) // true
+a eq b // false
+a.length
+a.head
+a.tail
+a.reverse // reverse the list
+a.map {v => v * 2} // or a.map {_ * 2} or a.map(_ * 2)
+a.filter {v => v % 3 == 0}
+a.filterNot(v => v == 5) // remove where value is 5
+a.reduceLeft(_ + _) // note the two _s below indicate the first and second args respectively
+a.foldLeft(10)(_ + _) // foldlLeft is like reduce, but with an explicit starting value
+(1 to 5).toList // from range
+val arr = a.toArray // convert to Array ('val a = a.toArray' would not compile: a cannot refer to itself)
+
Nil lists are identical, even of different types
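+A quick demonstration of that claim:
+val emptyStrings: List[String] = Nil
+val emptyInts: List[Int] = Nil
+emptyStrings == emptyInts // true: both are the empty list
+emptyStrings eq emptyInts // also true: Nil is a single shared object
+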
+val list = List(3, 5, 9, 11, 15, 19, 21)
+val it = list.iterator
+if (it.hasNext) {
+ println(it.next) // 3
+}
+
+val it = list grouped 3 // `grouped` returns an Iterator of fixed-size chunks of the collection
+val it = list sliding 3 // `sliding` returns an Iterator over a sliding window of the collection
+val it = list sliding(3, 3) // `sliding` can take the size of the window as well as the size of the step during each iteration
+list takeRight 3
+list dropRight 3
+
+val xs = List(3, 5, 9) // `zip` will stitch two iterables into an iterable of pairs (tuples) of corresponding elements from both iterables.
+val ys = List("Bob", "Ann", "Stella")
+xs zip ys
+
+// If two Iterables aren't the same size, then `zip` will only zip what can be paired.
+xs zipAll(ys, -1, "?") // if two Iterables aren't the same size, then `zipAll` can provide fillers
+
+xs.zipWithIndex
+
val s = Seq("hello", "to", "you")
+val filtered = s.filter(_.length > 2)
+val r = s map {
+ _.reverse
+ }
+val s = for (v <- 1 to 10 if v % 3 == 0) yield v // create a sequence from a for comprehension with an optional condition
+s.toList
+
val strictList = List(10, 20, 30)
+val lazyList = strictList.view // Strict collection always processes its elements but lazy collection does it on demand
+
+val infinite = Stream.from(1)
+infinite.take(4).sum
+Stream.continually(1).take(4).sum
+
+// Always remember tail of a lazy collection is never computed unless required
+
+def makeLazy(value: Int): Stream[Int] = {
+ Stream.cons(value, makeLazy(value + 1))
+}
+val stream = makeLazy(1)
+stream.head
+
val myMap = Map("MI" -> "Michigan", "OH" -> "Ohio", "WI" -> "Wisconsin", "MI" -> "Michigan")
+
+// access by key - Accessing a map by key results in an exception if key is not found
+myMap("MI")
+myMap.contains("IL")
+
+val aNewMap = myMap + ("IL" -> "Illinois") // add - creates a new collection if immutable
+val aNewMap = myMap - "MI" // remove - Attempted removal of nonexistent elements from a map is handled gracefully
+val aNewMap = myMap -- List("MI", "OH") // remove multiples
+val aNewMap = myMap - ("MI", "WI") // Notice: single '-' operator for tuples
+
+var anotherMap = myMap
+anotherMap += ("IL" -> "Illinois") // compiler trick - creates a new collection and reassigns; note the 'var'
+
+// Map values can be iterated
+val mapValues = myMap.values
+mapValues.size
+mapValues.head
+mapValues.last
+
+for (mval <- mapValues) println(mval)
+// NOTE that the following will not compile, as iterators do not implement "contains"
+//mapValues.contains("Illinois")
+
+// Map keys may be of mixed type
+val myMap = Map("Ann Arbor" -> "MI", 49931 -> "MI")
+
+// Mixed type values can be added to a map
+val myMap = scala.collection.mutable.Map.empty[String, Any]
+myMap("Ann Arbor") = (48103, 48104, 48108)
+myMap("Houghton") = 49931
+
+// Map equivalency is independent of order
+val myMap1 = Map("MI" -> "Michigan", "OH" -> "Ohio", "WI" -> "Wisconsin", "IA" -> "Iowa")
+val myMap2 = Map("WI" -> "Wisconsin", "MI" -> "Michigan", "IA" -> "Iowa", "OH" -> "Ohio")
+myMap1.equals(myMap2)
+
Maps insertion with duplicate key updates previous entry with subsequent value
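+For example, in a one-line sketch of the behavior described above:
+Map("MI" -> "Michigan", "MI" -> "Mich")("MI") // "Mich": the second entry wins
+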
+val myMap = mutable.Map("MI" -> "Michigan", "OH" -> "Ohio", "WI" -> "Wisconsin", "IA" -> "Iowa")
+// same methods than immutable maps work
+myMap += ("IL" -> "Illinois") // += is a real method on mutable maps; note the difference from the immutable var trick
+myMap.clear() // Convention is to use parens if possible when method called changes state
+
val mySet = Set(1, 3, 4, 9) // immutable
+val mySet = mutable.Set("Michigan", "Ohio", "Wisconsin", "Iowa")
+mySet.size
+mySet contains "Ohio"
+mySet += "Oregon"
+mySet += ("Iowa", "Ohio")
+mySet ++= List("Iowa", "Ohio")
+mySet -= "Ohio"
+mySet --= List("Iowa", "Ohio")
+mySet.clear() // mutable only
+
+var sum = 0
+for (i <- Set(1, 3, 4, 9)) // for comprehension (over the Int set from above)
+ sum = sum + i // of course this is the same thing as the set's reduce(_ + _)
+
+val mySet1 = Set("Michigan", "Ohio", "Wisconsin", "Iowa")
+val mySet2 = Set("Wisconsin", "Michigan", "Minnesota")
+mySet1 intersect mySet2 // or & operator
+mySet1 union mySet2 // or | operator
+mySet2 subsetOf mySet1
+mySet1 diff mySet2
+mySet1.equals(mySet2) // independent of order
+
val someValue: Option[String] = Some("I am wrapped in something")
+val nullValue: Option[String] = None
+someValue.get // java.util.NoSuchElementException if None
+nullValue getOrElse "No value"
+nullValue.isEmpty
+
+val value = someValue match { // pattern matching
+ case Some(v) => v
+ case None => 0.0
+ }
+
Some(10) filter { _ == 10}
+ Some(Some(10)) flatMap { _ map { _ + 10}}
+ var newValue1 = 0
+ Some(20) foreach { newValue1 = _}
+
val list = List(1, 2, 3, 4, 5)
+val result = list.flatMap(it => if (it % 2 == 0) Some(it) else None)
+
val values = List(Some(10), Some(20), None, Some(15))
+ val newValues = for {
+ someValue <- values
+ value <- someValue
+ } yield value
+
Scala can implicitly convert from a Scala collection type into a Java collection type.
+ + + + + + + + + + + + + + + + + + + +Use the “sbt new” command, providing the name of the template. For example, “$ sbt new akka/hello-akka.g8”. +You can find a list of templates here.
+Or download from Scala Project Templates
+trait Animal
+class Bird extends Animal
+class Mammal extends Animal
+class Fish extends Animal
+
+object Animal {
+ def apply(animal: String): Animal = animal.toLowerCase match {
+ case "bird" => new Bird
+ case "mammal" => new Mammal
+ case "fish" => new Fish
+ case x: String => throw new RuntimeException(s"Unknown animal: $x")
+ }
+}
+
// A has a B and C
+case class A(b: B, c: C)
+
+// A is a B or C
+sealed trait A
+case class B() extends A
+case class C() extends A
+
Case classes have only data and do not contain any functionality on top of this data, as normal classes would.
+sealed trait Shape
+case class Circle(radius: Double) extends Shape
+case class Rectangle(height: Double, width: Double) extends Shape
+
+object Shape {
+ def area(shape: Shape): Double =
+ shape match {
+ case Circle(radius) => Math.PI * Math.pow(radius, 2) // use pattern matching to process
+ case Rectangle(h, w) => h * w
+ }
+}
+
abstract class StringWriter {
+ def write(data: String): String
+}
+
+class BasicStringWriter extends StringWriter {
+ override def write(data: String): String =
+ s"Writing the following data: ${data}"
+}
+
+trait CapitalizingStringWriter extends StringWriter {
+ abstract override def write(data: String): String = {
+ super.write(data.split("\\s+").map(_.capitalize).mkString(" "))
+ }
+}
+
+trait UppercasingStringWriter extends StringWriter {
+ abstract override def write(data: String): String = {
+ super.write(data.toUpperCase)
+ }
+}
+
+object Example {
+ def main(args: Array[String]): Unit = {
+ val writer1 = new BasicStringWriter with UppercasingStringWriter with CapitalizingStringWriter
+ System.out.println(s"Writer 1: '${writer1.write("we like learning scala!")}'")
+ }
+}
+
Stackable traits order of execution
+Stackable traits are always executed from the right mixin to the left. +Sometimes, however, if we only get output and it doesn't depend on what is passed to the method, we simply end up with method calls on a stack, which then get evaluated and it will appear as if things are applied from left to right.
+https://jonasboner.com/real-world-scala-dependency-injection-di/
+// Service Interfaces and Component Definitions
+
+trait OnOffDeviceComponent {
+ val onOff: OnOffDevice // abstract val
+
+ trait OnOffDevice {
+ def on: Unit
+ def off: Unit
+ }
+}
+
+trait SensorDeviceComponent {
+ val sensor: SensorDevice
+
+ trait SensorDevice {
+ def isCoffeePresent: Boolean
+ }
+}
+
+// =======================
+// Component / Service Implementations
+
+trait OnOffDeviceComponentImpl extends OnOffDeviceComponent {
+ class Heater extends OnOffDevice {
+ def on = println("heater.on")
+ def off = println("heater.off")
+ }
+}
+
+trait SensorDeviceComponentImpl extends SensorDeviceComponent {
+ class PotSensor extends SensorDevice {
+ def isCoffeePresent = true
+ }
+}
+
+// =======================
+// Component declaring two dependencies that it wants injected
+trait WarmerComponentImpl {
+ this: SensorDeviceComponent with OnOffDeviceComponent => // Use of self-type for composition
+ class Warmer {
+ def trigger = {
+ if (sensor.isCoffeePresent) onOff.on
+ else onOff.off
+ }
+ }
+}
+
+// =======================
+// Instantiation (and configuration) of the services in the ComponentRegistry module
+
+object ComponentRegistry extends
+ OnOffDeviceComponentImpl with
+ SensorDeviceComponentImpl with
+ WarmerComponentImpl {
+
+ val onOff = new Heater // all instantiations in one spot; can be easily be replaced by e.g. mocks
+ val sensor = new PotSensor
+ val warmer = new Warmer
+}
+
+// =======================
+val warmer = ComponentRegistry.warmer
+warmer.trigger
+
// Define some behavior in terms of operations that a type must support in order to be considered a member of the type class.
+trait Number[T] {
+ def plus(x: T, y: T): T
+ def divide(x: T, y: Int): T
+}
+
+// Define the default type class members in the companion object of the trait
+object Number {
+
+ implicit object DoubleNumber extends Number[Double] { // note the implicit
+ override def plus(x: Double, y: Double): Double = x + y
+ override def divide(x: Double, y: Int): Double = x / y
+ }
+}
+
+object Stats {
+
+// older pattern with implicit parameter
+// def mean[T](xs: Vector[T])(implicit ev: Number[T]): T = // note the implicit
+// ev.divide(xs.reduce(ev.plus(_, _)), xs.size)
+
+ def mean[T: Number](xs: Vector[T]): T = // note the context bound
+ implicitly[Number[T]].divide(
+ xs.reduce(implicitly[Number[T]].plus(_, _)), // retrieve the evidence via implicitly[]
+ xs.size
+ )
+}
+
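+Hypothetical usage of the Number type class defined above:
+val avg = Stats.mean(Vector(1.0, 2.0, 3.0)) // 2.0; Number[Double] is resolved from Number's companion object
+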
abstract class Element(text: String) {
+ def accept(visitor: Visitor)
+}
+
+case class Title(text: String) extends Element(text) {
+ override def accept(visitor: Visitor): Unit = {
+ visitor.visit(this)
+ }
+}
+
+case class Text(text: String) extends Element(text) {
+ override def accept(visitor: Visitor): Unit = {
+ visitor.visit(this)
+ }
+}
+
+class Document(parts: List[Element]) {
+ def accept(visitor: Visitor): Unit = {
+ parts.foreach(p => p.accept(visitor))
+ }
+}
+
+trait Visitor {
+ def visit(element: Element)
+}
+
+class VisitorImpl1 extends Visitor {
+ override def visit(element: Element): Unit = {
+ element match {
+ case Title(text) => ???
+ case Text(text) => ???
+ //...
+ }
+ }
+}
+
import com.typesafe.config.ConfigFactory
+
+trait AppConfigComponent {
+
+ val appConfigService: AppConfigService
+
+ class AppConfigService() {
+ //-Dconfig.resource=production.conf for overriding
+ private val conf = ConfigFactory.load()
+ private val appConf = conf.getConfig("job-scheduler")
+ private val db = appConf.getConfig("db")
+
+ val configPath = appConf.getString("config-path")
+ val configExtension = appConf.getString("config-extension")
+ val workers = appConf.getInt("workers")
+
+ val dbConnectionString = db.getString("connection-string")
+ val dbUsername = db.getString("username")
+ val dbPassword = db.getString("password")
+ }
+}
+
import scala.collection.mutable.Map
+
+trait Memoizer {
+
+ def memo[X, Y](f: X => Y): (X => Y) = {
+ val cache = Map[X, Y]()
+ (x: X) => cache.getOrElseUpdate(x, f(x))
+ }
+}
+
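+A hypothetical usage sketch for the memo helper above; the second call is answered from the cache:
+object MemoDemo extends Memoizer {
+  val slowSquare: Int => Int = x => { println(s"computing $x"); x * x }
+  val fastSquare = memo(slowSquare)
+  fastSquare(5) // prints "computing 5" and returns 25
+  fastSquare(5) // returns 25 without recomputing
+}
+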
Using scalaz:
+ +The pimp my library design pattern is really similar to extension methods in C#.
+ + + + + + + + + + + + + + + + + + + +REPL https://ammonite.io/
REPL: https://ammonite.io/
+https://www.scala-lang.org/download/
+Class Names - For all class names, the first letter should be in Upper Case. If several words are used to form a name of the class, each inner word's first letter should be in Upper Case.
+class MyFirstScalaClass
Method Names - All method names should start with a Lower Case letter. If multiple words are used to form the name of the method, then each inner word's first letter should be in Upper Case.
+def myMethodName()
Program File Name - Name of the program file should exactly match the object name. When saving the file you should save it using the object name (Remember Scala is case-sensitive) and append ".scala" to the end of the name. If the file name and the object name do not match your program will not compile.
+Assume 'HelloWorld' is the object name: the file should be saved as 'HelloWorld.scala'.
+import scala.collection._ // wildcard import. When importing all the names of a package or class, one uses the underscore character (_) instead of the asterisk (*).
+import scala.collection.Vector // one class import
+import scala.collection.{Vector, Sequence} // selective import. Multiple classes can be imported from the same package by enclosing them in curly braces
+import scala.collection.{Vector => Vec28} // renaming import.
+import java.util.{Date => _, _} // import all from java.util except Date.
+
All classes from the java.lang package are imported by default. The Predef object provides definitions that are accessible in all Scala compilation units without explicit qualification:
+import scala.collection.mutable.HashMap // Mutable collections must be imported.
+import scala.collection.immutable.{TreeMap, TreeSet} // So are specialized collections.
+
You can combine expressions by surrounding them with {}. We call this a block. The result of the last expression in the block is the result of the overall block, too.
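+For example (a minimal sketch):
+val result = {
+  val a = 2
+  a * 3
+} // result == 6, the value of the last expression in the block
+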
+ +var x = 5 // variable
+val x = 5 // immutable value / "const"
+var x: Double = 5 // explicit type
+println(x)
+
A lazy val is an assignment that is not evaluated until it is first accessed. Note there is no lazy var.
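+A small sketch of the deferred evaluation:
+lazy val expensive = { println("computing..."); 42 } // nothing printed yet
+expensive // prints "computing..." and yields 42
+expensive // yields the cached 42; the body does not run again
+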
+ +val a = 2 // int
+val b = 31L // long
+val c = 0x30B // hexadecimal
+val d = 3f // float
+val e = 3.22d // double
+val f = 93e-9
+val g = 'a' // character
+val h = '\u0061' // unicode for a
+val i = '\141' // octal for a
+val j = '\"' // escape sequences
+val k = '\\'
+val s = "To be or not to be" // string
+s.charAt(0)
+val s2 = """An apple a day
+keeps the doctor away""" // multi-lines string
+s2.split('\n')
+val s3 = """An apple a day
+ |keeps the doctor away""" // Multiline String literals can use | to specify the starting position of subsequent lines, then use stripMargin to remove the surplus indentation.
+s3.stripMargin
+
object Planets extends Enumeration {
+ val Mercury = Value
+ val Venus = Value
+ val Earth = Value
+ val Mars = Value
+ val Jupiter = Value
+ val Saturn = Value
+ val Uranus = Value
+ val Neptune = Value
+ val Pluto = Value
+}
+
+Planets.Mercury.id
+Planets.Mercury.toString //How does it get the name? by Reflection.
+
+object GreekPlanets extends Enumeration {
+ val Mercury = Value(1, "Hermes") // enumeration with your own index and/or your own Strings
+ val Venus = Value(2, "Aphrodite")
+ //Fun Fact: Tellus is Roman for (Mother) Earth
+ val Earth = Value(3, "Gaia")
+ val Mars = Value(4, "Ares")
+ val Jupiter = Value(5, "Zeus")
+ val Saturn = Value(6, "Cronus")
+ val Uranus = Value(7, "Ouranus")
+ val Neptune = Value(8, "Poseidon")
+ val Pluto = Value(9, "Hades")
+}
+
(1,2,3) // tuple literal. (Tuple3)
+var (x,y,z) = (1,2,3) // destructuring bind: tuple unpacking via pattern matching.
+// BAD var x,y,z = (1,2,3) // hidden error: each assigned to the entire tuple.
+
+val tuple = ("apple", 3) // mixed type tuple
+tuple._1
+tuple._2
+tuple.swap
+
var xs = List(1,2,3) // list (immutable).
+xs(2) // paren indexing
+1 :: List(2,3) // cons (create a new list by prepending the element).
+
+1 to 5 // Range sugar. Same as `1 until 6`
+1 to 10 by 2
+Range(1, 10, 2) // Range does not include the last item, even in a step increment
+Range(1, 9, 2).inclusive
+
if (check) happy else sad // conditional.
+if (check) happy //
+if (check) happy else () // same as above
+while (x < 5) { println(x); x += 1} // while loop.
+do { println(x); x += 1} while (x < 5) // do while loop.
+
+for (x <- xs if x%2 == 0) yield x*10 // for comprehension with guard
+xs.filter(_%2 == 0).map(_*10) // same as filter/map
+for ((x,y) <- xs zip ys) yield x*y // for comprehension: destructuring bind
+(xs zip ys) map { case (x,y) => x*y } // same as
+for (x <- xs; y <- ys) yield x*y // for comprehension: cross product. Later generators vary more rapidly than earlier ones
+xs flatMap {x => ys map {y => x*y}} // same as
+for (x <- xs; y <- ys) {
+ println("%d/%d = %.1f".format(x, y, x/y.toFloat)) // for comprehension: imperative-ish
+}
+for (i <- 1 to 5) { // for comprehension: iterate including the upper bound
+ println(i)
+}
+for (i <- 1 until 5) { // for comprehension: iterate omitting the upper bound
+ println(i)
+}
+
+import scala.util.control.Breaks._ // break
+breakable {
+ for (x <- xs) {
+ if (Math.random < 0.1) break
+ }
+}
+
val helloMessage = "Hello World"
+s"Application $helloMessage" // string interpolation; can include expressions which can include numbers and strings
+// use `f` prefix before the string instead of an `s` for sprintf formatting
+
Scala is a functional language in the sense that every function is a value, and every value is an object, so ultimately every function is an object. Scala provides a lightweight syntax for defining anonymous functions; it supports higher-order functions, allows functions to be nested, and supports currying.
+def add(x: Int, y: Int): Int = x + y // the return type is declared after the parameter list and a colon
+
+// GOOD def f(x: Any) = println(x)
+// BAD def f(x) = println(x) // syntax error: need types for every arg.
+
+def f(x: Int) = { // inferred return type
+ val square = x*x
+ square.toString
+ } // The last expression in the body is the method’s return value. (Scala does have a return keyword, but it’s rarely used.)
+
+// BAD def f(x: Int) { x*x } hidden error: without = it’s a Unit-returning procedure; causes havoc
+
+// When performing recursion, the return type on the method is mandatory!
+
def `put employee on probation`(employee: Employee) = {
+ new Employee(employee.`first name`, employee.`last name`, "Probation")
+ }
+
def addThenMultiply(x: Int, y: Int)(multiplier: Int): Int = (x + y) * multiplier
+def name: String = System.getProperty("name")
+
def foo(x: Int) { //Note: No `=`; returns Unit
+ print(x.toString)
+ }
+def foo(x: Int): Unit = print(x.toString) // or
+
Convention (not required by the compiler) states that if you call a method that returns Unit / has a side effect, you invoke it with empty parentheses; otherwise leave the parentheses out, as illustrated below.
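+A short illustration of the convention (hypothetical methods):
+def reset(): Unit = println("state cleared") // side-effecting: declared and invoked with ()
+reset()
+def currentCount: Int = 42 // pure accessor: no parentheses at declaration or call site
+val n = currentCount
+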
+ +def addColorsWithDefaults(red: Int = 0, green: Int = 0, blue: Int = 0) = {
+ (red, green, blue)
+}
+
+me.addColors(blue = 40)
+
def sum(args: Int*) = args.reduceLeft(_+_) // varargs. must be last arg
+
+def capitalizeAll(args: String*) = {
+ args.map { arg =>
+ arg.capitalize
+ }
+ }
+
+capitalizeAll("rarity", "applejack")
+
If you want a collection expanded into a vararg, add :_*
def repeatedParameterMethod(x: Int, y: String, z: Any*) = {
+ "%d %ss can give you %s".format(x, y, z.mkString(", "))
+ }
+
+repeatedParameterMethod(3, "egg", List("a delicious sandwich", "protein", "high cholesterol"):_*)
+// "3 eggs can give you a delicious sandwich, protein, high cholesterol"
+
As a precaution, the helpful @tailrec annotation will raise a compile-time error if a method is not tail recursive, meaning that the last call (and only recursive call) of the method must be the recursive one. Scala optimizes such tail-recursive calls into a loop instead of a growing stack.
+import scala.annotation.tailrec // importing annotation!
+@tailrec // compiler will check that the function is tail recursive
+def factorial(i: BigInt): BigInt = {
+ @tailrec
+ def fact(i: BigInt, accumulator: BigInt): BigInt = { // methods can be placed inside in methods; return type is obligatory
+ if (i <= 1)
+ accumulator
+ else
+ fact(i - 1, i * accumulator)
+ }
+ fact(i, 1)
+ }
+
+factorial(3)
+
import java.util.{Date, Locale}
+import java.text.DateFormat._
+
+object FrenchDate {
+ def main(args: Array[String]) {
+ val now = new Date
+ val df = getDateInstance(LONG, Locale.FRANCE)
+ println(df format now) // Methods taking one argument can be used with an infix syntax. Equivalent to df.format(now)
+ }
+}
+
1 + 2 * 3 / x
consists exclusively of method calls, because it is equivalent to the following expression: (1).+(((2).*(3))./(x))
+This also means that +, *, etc. are valid identifiers in Scala.
Infix Operators do NOT work if an object has a method that takes two parameters.
+ val g: Int = 31
+ val s: String = g toHexString // Postfix operators work if an object has a method that takes no parameters
+
Prefix operators work if an object has a method name that starts with unary_
+class Stereo {
+ def unary_+ = "on"
+ def unary_- = "off"
+ }
+
+val stereo = new Stereo
++stereo // it is on
+
Methods whose names end in a colon are right-associative: the object the method is invoked on goes on the right, and the method parameter on the left.
+class Foo (y:Int) {
+ def ~:(n:Int) = n + y + 3
+ }
+
+val foo = new Foo(9)
+10 ~: foo
+foo.~:(10) // same as
+
def lambda = (x: Int) => x + 1
+
+// other variants
+def lambda2 = { x: Int => x + 1 }
+val lambda3 = new Function1[Int, Int] {
+ def apply(v1: Int): Int = v1 + 1
+ }
+
+val everything = () => 42 // without parameter
+val add = (x: Int, y: Int) => x + y // multiple parameters
+
+(1 to 5).map(_*2) // underscore notation.
+(1 to 5) map (_*2) // same with infix sugar.
+(1 to 5).reduceLeft( _+_ ) // underscores are positionally matched 1st and 2nd args.
+(1 to 5).map( x => x*x ) // to use an arg twice, have to name it.
+(1 to 5).map { x => val y = x*2; println(y); y } // block style returns last expression.
+(1 to 5) filter {_%2 == 0} map {_*2} // pipeline style (works with parens too).
+
+// GOOD (1 to 5).map(2*)
+// BAD (1 to 5).map(*2) // anonymous function: bound infix method. Use 2*_ for sanity’s sake instead.
+
+def compose(g: R => R, h: R => R) = (x:R) => g(h(x))
+val f = compose({_*2}, {_-1}) // anonymous functions: to pass in multiple blocks, need outer parens.
+
Passing anonymous functions as parameter:
+ +Function returning another function using an anonymous function:
+ +Function Values:
+object Timer {
+ def oncePerSecond(callback: () => Unit) { // () => T is a Function type that takes a Unit type. Unit is known as 'void' to a Java programmer.
+ while (true) { callback(); Thread sleep 1000 }
+ }
+
+ def timeFlies() {
+ println("time flies like an arrow...")
+ }
+
+ def main(args: Array[String]) {
+ oncePerSecond(timeFlies) // function value; could also be () => timeFlies()
+ }
+}
+
This is used extensively in Scala to create blocks.
+ def calc(x: => Int): Either[Throwable, Int] = { //x is a call by name parameter; delayed execution of x
+ try {
+ Right(x)
+ } catch {
+ case b: Throwable => Left(b)
+ }
+ }
+
+ val y = calc { //This looks like a natural block
+ println("Here we go!") //Some superfluous call
+ 49 + 20
+ }
+
By name parameters can also be used with an Object and apply to make interesting block-like calls
+object PigLatinizer {
+ def apply(x: => String) = x.tail + x.head + "ay"
+ }
+
+val result = PigLatinizer {
+ val x = "pret"
+ val z = "zel"
+ x ++ z //concatenate the strings
+ }
+
val zscore = (mean: R, sd: R) => (x:R) => (x-mean)/sd // currying, obvious syntax.
+def zscore(mean: R, sd: R) = (x: R) => (x-mean)/sd // currying, obvious syntax
+def zscore(mean: R, sd: R)(x: R) = (x-mean)/sd // currying, sugar syntax. but then:
+val normer = zscore(7, 0.4) _ // need trailing underscore to get the partial, only for the sugar version.
+def mapmake[T](g: T => T)(seq: List[T]) = seq.map(g) // generic type.
+
+def multiply(x: Int, y: Int) = x * y
+val multiplyCurried = (multiply _).curried
+multiply(4, 5)
+multiplyCurried(3)(2)
+
def adder(m: Int, n: Int) = m + n
+val add2 = adder(2, _:Int) // You can partially apply any argument in the argument list, not just the last one.
+add2(3) // which is 5
+
+val add3 = adder _ // underscore to convert from a function to a lambda
+adder(1, 9)
+add3(1, 9)
+
val doubleEvens: PartialFunction[Int, Int] = new PartialFunction[Int, Int] { // full declaration
+ //States that this partial function will take on the task
+ def isDefinedAt(x: Int) = x % 2 == 0
+
+ //What we do if this partial function matches
+ def apply(v1: Int) = v1 * 2
+ }
+
+val tripleOdds: PartialFunction[Int, Int] = {
+ case x: Int if (x % 2) != 0 => x * 3 // syntactic sugar (usual way)
+ }
+
+val whatToDo = doubleEvens orElse tripleOdds // combine the partial functions together: OrElse
+
+val addFive = (x: Int) => x + 5
+val whatToDo = doubleEvens orElse tripleOdds andThen addFive // chain (partial) functions together: andThen
+
class C(x: R) // constructor params - x is only available in class body
+class C(val x: R) // c.x constructor params - automatic public (immutable) member defined
+class D(var x: R) // you can define class with var or val parameters
+
+class C(var x: R) {
+ assert(x > 0, "positive please") // constructor is class body
+ var y = x // declare a public member
+ val readonly = 5 // declare a gettable but not settable member
+ private var secret = 1 // declare a private member
+ def this() = this(42) // alternative constructor
+}
+
+new{ ... } // anonymous class
+abstract class D { ... } // define an abstract(non-createable) class.
+class C extends D { ... } // define an inherited class. Class hierarchy is linear, a class can only extend from one parent class
+class C(x: R) extends D(x) // inheritance and constructor params. (wishlist: automatically pass-up params by default)
+// A class can be placed inside another class
+object O extends D { ... } // define a singleton.
+
+trait T { ... } // traits. See below.
+class C extends T { ... }
+class C extends D with T { ... }
+
+// interfaces-with-implementation. no constructor params. mixin-able.
+trait T1; trait T2
+class C extends T1 with T2 // multiple traits.
+class C extends D with T1 with T2 // parent class and (multiple) trait(s).
+class C extends D { override def f = ...} // must declare method overrides.
+
+var c = new C(4) // Instantiation
+//BAD new List[Int]
+//GOOD List(1,2,3) // Instead, convention: callable factory shadowing the type
+
+classOf[String] // class literal.
+classOf[String].getCanonicalName
+classOf[String].getSimpleName
+val zoom = "zoom"
+zoom.getClass == classOf[String]
+
+x.isInstanceOf[String] // type check (runtime)
+x.asInstanceOf[String] // type cast (runtime)
+x: String // compare to parameter ascription (compile time)
+
class Complex(real: Double, imaginary: Double) {
+ def re = real // return type inferred automatically by the compiler
+ def im = imaginary // methods without arguments
+ def print(): Unit = println(s"$real + i * $imaginary")
+ override def toString() = "" + re + (if (im < 0) "" else "+") + im + "i" // override methods inherited from a super-class
+}
+
Asserts take a boolean argument and can take a message.
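For example:

```scala
assert(1 + 1 == 2)                         // boolean argument only
assert(1 + 1 == 2, "arithmetic is broken") // with a message
```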
+ +def addNaturals(nats: List[Int]): Int = {
+ require(nats forall (_ >= 0), "List contains negative numbers")
+ nats.foldLeft(0)(_ + _)
+} ensuring(_ >= 0)
+
When a class is instantiated inside another instance, its type is bound to that enclosing instance; this is a path-dependent type. Once established, a value of that type cannot be assigned to the corresponding type from a different enclosing instance.
+case class Board(length: Int, height: Int) {
+ case class Coordinate(x: Int, y: Int)
+}
+
+val b1 = Board(20, 20)
+val b2 = Board(30, 30)
+val c1 = b1.Coordinate(15, 15)
+val c2 = b2.Coordinate(25, 25)
+// val c1 = c2 won't work
+
Use A#B
for a Java-style inner class:
class Graph {
+ class Node {
+ var connectedNodes: List[Graph#Node] = Nil // accepts Nodes from any Graph
+ def connectTo(node: Graph#Node) {
+ if (connectedNodes.find(node.equals).isEmpty) {
+ connectedNodes = node :: connectedNodes
+ }
+ }
+ }
+ var nodes: List[Node] = Nil
+ def newNode: Node = {
+ val res = new Node
+ nodes = res :: nodes
+ res
+ }
+}
+
Static members (methods or fields) do not exist in Scala. Rather than defining static members, the Scala programmer declares these members in singleton objects, that is, classes with a single instance.
+object TimerAnonymous {
+ def oncePerSecond(callback: () => Unit) {
+ while (true) { callback(); Thread sleep 1000 }
+ }
+ def main(args: Array[String]) {
+ oncePerSecond(() => println("time flies like an arrow..."))
+ }
+}
+
The apply method gets special treatment in Scala: calling an instance or object as if it were a function invokes its apply method.
+class Employee (val firstName:String, val lastName:String)
+
+object Employee {
+ def apply(firstName:String, lastName:String) = new Employee(firstName, lastName) // would also work in a class, but rarer
+}
+
+val a = Employee("John", "Doe")
+// is equivalent to
+var b = Employee.apply("John", "Doe")
+
new
keyword is not mandatory to create instances of these classes (i.e. one can write Const(5) instead of new Const(5)),case class Person(first: String, last: String, age: Int = 0) // Case classes can have default and named parameters
+val p1 = Person("Fred", "Jones") // new is optional
+val p2 = new Person("Fred", "Jones")
+p1 == p2 // true
+p1.hashCode == p2.hashCode // true
+p1 eq p2 // false
+val p3 = p2.copy(first = "Jane") // copy the case class but change the name in the copy
+
case class Dog(var name: String, breed: String) // Case classes can have mutable properties - potentially unsafe
+
Case classes can be disassembled to their constituent parts as a tuple:
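For example, reusing the Person case class above:

```scala
val p = Person("Fred", "Jones")
val Person(first, last, age) = p // first = "Fred", last = "Jones", age = 0
Person.unapply(p).get            // the generated extractor: ("Fred", "Jones", 0)
```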
+ +sealed trait Tree // or abstract class
+final case class Sum(l: Tree, r: Tree) extends Tree
+final case class Var(n: String) extends Tree
+final case class Const(v: Int) extends Tree
+
{ case "x" => 5 }
defines a partial function which, when given the string "x" as argument, returns the integer 5, and fails with an exception otherwise.
type Environment = String => Int // the type Environment can be used as an alias of the type of functions from String to Int
+
+def eval(t: Tree, env: Environment): Int = t match {
+ case Sum(l, r) => eval(l, env) + eval(r, env)
+ case Var(n) => env(n)
+ case Const(v) => v
+}
+
+def derive(t: Tree, v: String): Tree = t match {
+ case Sum(l, r) => Sum(derive(l, v), derive(r, v))
+ case Var(n) if (v == n) => Const(1) // guard, an expression following the if keyword.
+ case _ => Const(0) // wild-card, written _, which is a pattern matching any value, without giving it a name.
+}
+
// GOOD (xs zip ys) map { case (x,y) => x*y }
+// BAD (xs zip ys) map( (x,y) => x*y ) // use case in function args for pattern matching.
+// BAD
+val v42 = 42
+Some(3) match {
+ case Some(v42) => println("42")
+ case _ => println("Not 42")
+} // “v42” is interpreted as a name matching any Int value, and “42” is printed.
+// GOOD
+val v42 = 42
+Some(3) match {
+ case Some(`v42`) => println("42")
+ case _ => println("Not 42")
+} // ”`v42`” with backticks is interpreted as the existing val v42, and “Not 42” is printed.
+// GOOD
+val UppercaseVal = 42
+Some(3) match {
+ case Some(UppercaseVal) => println("42")
+ case _ => println("Not 42")
+} // UppercaseVal is treated as an existing val, rather than a new pattern variable, because it starts with an uppercase letter.
+// Thus, the value contained within UppercaseVal is checked against 3, and “Not 42” is printed.
+
val secondElement = List(1,2,3) match {
+ case x :: y :: xs => y // bind and return the second element
+ case _ => 0
+ }
+
val MyRegularExpression = """a=([^,]+),\s+b=(.+)""".r //.r turns a String to a regular expression
+expr match {
+ case (MyRegularExpression(a, b)) => a + b
+ }
+
import scala.util.matching.Regex
+
+val numberPattern: Regex = "[0-9]".r
+
+numberPattern.findFirstMatchIn("awesomepassword") match {
+ case Some(_) => println("Password OK")
+ case None => println("Password must contain a number")
+}
+
With groups:
val input = "background-color: #A03300; text-align: center" // sample input, assumed for illustration
+val keyValPattern: Regex = "([0-9a-zA-Z-#() ]+): ([0-9a-zA-Z-#() ]+)".r
+
+for (patternMatch <- keyValPattern.findAllMatchIn(input))
+ println(s"key: ${patternMatch.group(1)} value: ${patternMatch.group(2)}")
+
class Car(val make: String, val model: String, val year: Short, val topSpeed: Short)
+
+object Car { // What is typical is to create a custom extractor in the companion object of the class.
+ def unapply(x: Car) = Some(x.make, x.model, x.year, x.topSpeed) // returns an Option[T]
+}
+
+val Car(a, b, c, d) = new Car("Chevy", "Camaro", 1978, 120) // assign values to a .. d
+
+val x = new Car("Chevy", "Camaro", 1978, 120) match { // pattern matching
+ case Car(s, t, _, _) => (s, t) // _ for variables we don't care about.
+ case _ => ("Ford", "Edsel") // fallback
+}
+
Avoid allocating runtime objects.
+class Wrapper(val underlying: Int) extends AnyVal {
+ def foo: Wrapper = new Wrapper(underlying * 19)
+}
+
It has a single, public val parameter that is the underlying runtime representation. The type at compile time is Wrapper, but at runtime, the representation is an Int. A value class can define defs, but no vals, vars, or nested traits, classes or objects.
+A value class can only extend universal traits and cannot be extended itself. A universal trait is a trait that extends Any, only has defs as members, and does no initialization. Universal traits allow basic inheritance of methods for value classes, but they incur the overhead of allocation.
Apart from inheriting code from a super-class, a Scala class can also import code from one or several traits, i.e. interfaces which can also contain code. In Scala, when a class inherits from a trait, it implements that trait's interface and inherits all the code contained in the trait.
+trait Ord {
+ def < (that: Any): Boolean // The type Any which is used above is the type which is a super-type of all other types in Scala
+ def <=(that: Any): Boolean = (this < that) || (this == that)
+ def > (that: Any): Boolean = !(this <= that)
+ def >=(that: Any): Boolean = !(this < that)
+}
+
+class Date(y: Int, m: Int, d: Int) extends Ord {
+ def year = y
+ def month = m
+ def day = d
+ override def toString(): String = year + "-" + month + "-" + day
+
+ override def equals(that: Any): Boolean =
+ that.isInstanceOf[Date] && {
+ val o = that.asInstanceOf[Date]
+ o.day == day && o.month == month && o.year == year
+ }
+
+ def <(that: Any): Boolean = { // The trait declare the type (e.g. method), where a concrete implementer will satisfy the type
+ if (!that.isInstanceOf[Date])
+ error("cannot compare " + that + " and a Date")
+ val o = that.asInstanceOf[Date]
+ (year < o.year) ||
+ (year == o.year && (month < o.month ||
+ (month == o.month && day < o.day )))
+ }
+}
+
Traits can have concrete implementations, with their own state, that can be mixed into concrete classes
+Traits can be mixed in during instantiation!
+trait Logging {
+ var logCache = List[String]()
+
+ def log(value: String) = {
+ logCache = logCache :+ value
+ }
+
+ def log = logCache
+ }
+class A(s: String) // minimal class for the example (assumed)
+val a = new A("stuff") with Logging // mixin traits during instantiation!
+a.log("I did something")
+a.log.size
+
abstract class IntQueue {
+ def get(): Int
+ def put(x: Int)
+}
+
+import scala.collection.mutable.ArrayBuffer
+
+class BasicIntQueue extends IntQueue {
+ private val buf = new ArrayBuffer[Int]
+ def get() = buf.remove(0)
+ def put(x: Int) { buf += x }
+}
+
+trait Doubling extends IntQueue {
+ abstract override def put(x: Int) { super.put(2 * x) } // abstract override is necessary to stack traits
+}
+
+class MyQueue extends BasicIntQueue with Doubling // could also mixin during instantiation
+
+val myQueue = new MyQueue
+myQueue.put(3)
+myQueue.get()
+
Stackable modifications such as Doubling must be declared with abstract override.

Use classes:
+Use traits:
+ abstract
+ case
+ catch
+ class
+ def
+ do
+ else
+ extends
+ false
+ final
+ finally
+ for
+ forSome
+ if
+ implicit
+ import
+ lazy
+ match
+ new
+ null
+ object
+ override
+ package
+ private
+ protected
+ return
+ sealed
+ super
+ this
+ throw
+ trait
+ try
+ true
+ type
+ val
+ var
+ while
+ with
+ yield
+ -
+ :
+ =
+ =>
+ <-
+ <:
+ <%
+ >:
+ #
+ @
+
Writing TDD unit tests with ScalaTest
+ +package com.acme.pizza
+
+import org.scalatest.FunSuite
+import org.scalatest.BeforeAndAfter
+
+class PizzaTests extends FunSuite with BeforeAndAfter {
+
+ var pizza: Pizza = _
+
+ before {
+ pizza = new Pizza
+ }
+
+ test("new pizza has zero toppings") {
+ assert(pizza.getToppings.size == 0)
+ }
+
+ test("adding one topping") {
+ pizza.addTopping(Topping("green olives"))
+ assert(pizza.getToppings.size === 1)
+ }
+
+ // mark that you want a test here in the future
+ test ("test pizza pricing") (pending)
+
+}
+
import org.scalatest.FunSuite
+class AddSuite extends FunSuite {
+ test("3 plus 3 is 6") {
+ assert((3 + 3) == 6)
+ }
+}
+
The structure of this test is flat—like xUnit, but the test name can be written in specification style:
+import org.scalatest.FlatSpec
+class AddSpec extends FlatSpec {
+ "Addition of 3 and 3" should "have result 6" in {
+ assert((3 + 3) == 6)
+ }
+}
+
import collection.mutable.Stack
+import org.scalatest._
+
+class ExampleSpec extends FlatSpec with Matchers {
+
+ "A Stack" should "pop values in last-in-first-out order" in {
+ val stack = new Stack[Int]
+ stack.push(1)
+ stack.push(2)
+ stack.pop() should be (2)
+ stack.pop() should be (1)
+ }
+
+ it should "throw NoSuchElementException if an empty stack is popped" in {
+ val emptyStack = new Stack[Int]
+ a [NoSuchElementException] should be thrownBy {
+ emptyStack.pop()
+ }
+ }
+}
+
import org.scalatest._
+
+class Calculator {
+ def add(a:Int, b:Int): Int = a + b
+}
+
+class CalcSpec extends FeatureSpec with GivenWhenThen {
+ info("As a calculator owner")
+ info("I want to be able add two numbers")
+ info("so I can get a correct result")
+ feature("Addition") {
+ scenario("User adds two numbers") {
+ Given("a calculator")
+ val calc = new Calculator
+ When("two numbers are added")
+ var result = calc.add(3, 3)
+ Then("we get correct result")
+ assert(result == 6)
+ }
+ }
+}
+
Type Refinement = "subclassing without naming the subclass".
class Entity {
+  def persistForReal() = () // stub so the example compiles (assumed)
+}
+
+trait Persister {
+  def doPersist(e: Entity) = {
+    e.persistForReal()
+  }
+}
+
+// our refined instance (and type):
+val refinedMockPersister = new Persister {
+ override def doPersist(e: Entity) = ()
+}
+
class Reference[T] {
+ private var contents: T = _ // _ represents a default value. This default value is 0 for numeric types, false for the Boolean type, () for the Unit type and null for all object types.
+ def set(value: T) { contents = value }
+ def get: T = contents
+}
+
+trait Cache[K, V] {
+ def get(key: K): V
+ def put(key: K, value: V)
+ def delete(key: K)
+ }
+
+def remove[K](key: K) // function
+
Covariance (+A) allows you to assign your container to a variable of the same type or of a parent type.
class MyContainer[+A](a: A)(implicit manifest: scala.reflect.Manifest[A]) {
+ private[this] val item = a
+
+ def get = item
+
+ def contents = manifest.runtimeClass.getSimpleName
+ }
+
+ class Fruit; class Orange extends Fruit // minimal hierarchy, assumed for the example
+
+ val fruitBasket: MyContainer[Fruit] = new MyContainer[Orange](new Orange())
+ fruitBasket.contents
+
Contravariance (-A) is the opposite of covariance: a container of a parent type can be assigned to a variable typed as a container of one of its subtypes.
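A minimal sketch, reusing the Fruit/Orange hierarchy from the covariance example:

```scala
class Sink[-A] {
  def accept(a: A): Unit = () // contravariant type in parameter position
}

val orangeSink: Sink[Orange] = new Sink[Fruit] // a sink for any Fruit also accepts Oranges
```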
Declaring neither - nor + indicates invariance: you can use neither a superclass variable reference ("contravariant" position) nor a subclass variable reference ("covariant" position) of that type.
class Animal // assumed base class so the example compiles
+abstract class Pet extends Animal { def name: String }
+
+class Cat extends Pet {
+ override def name: String = "Cat"
+}
+
+class PetContainer[P <: Pet](p: P) {
+ def pet: P = p // The class PetContainer take a type parameter P which must be a subtype of Pet.
+}
+
Lower type bounds declare a type to be a supertype of another type. The term B >: A
expresses that the type parameter B or the abstract type B refer to a supertype of type A.
trait Container {
+ type T
+ val data: T
+
+ def compare(other: T) = data.equals(other)
+}
+
+class StringContainer(val data: String) extends Container {
+ override type T = String
+}
+
Generics:
+Abstract types:
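Both headings name the two encodings of the same idea; a sketch of the generic (type-parameter) form of the Container trait above:

```scala
// generics: the element type is a type parameter
trait GenContainer[T] {
  val data: T
  def compare(other: T) = data.equals(other)
}

// abstract types: the element type is a type member, as in the Container trait above
```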
+We can make a type infix, meaning that a generic type with two type parameters can be displayed between two types.
+The type specifier Pair[String,Int]
can be written as String Pair Int
.
class Pair[A, B](a: A, b: B)
+
+type ~[A,B] = Pair[A,B]
+val pairlist: List[String ~ Int] // operator-like usage
+
+case class Item[T](i: T) {
+ def ~(j: Item[T]) = new Pair(this, j) // creating an infix operator method to use with our infix type
+}
+
+(Item("a") ~ Item("b")).isInstanceOf[String ~ String]
+
import scala.language.reflectiveCalls // use reflection --> slow
+
+def onlyThoseThatCanPerformQuacks(quacker: {def quack:String}): String = {
+ "received message: %s".format(quacker.quack)
+ }
+
+type SpeakerAndMover = {def speak:String; def move(steps:Int, direction:String):String} // with type aliasing
+
Self-types are a way to declare that a trait must be mixed into another trait, even though it doesn’t directly extend it. That makes the members of the dependency available without imports.
+trait User {
+ def username: String
+}
+
+trait Tweeter {
+ this: User => // reassign this
+ def tweet(tweetText: String) = println(s"$username: $tweetText")
+}
+
+class VerifiedTweeter(val username_ : String) extends Tweeter with User { // We mixin User because Tweeter required it
+ def username = s"real $username_"
+}
+
There are two specific requirements created by a self-type (say trait B { this: A => }):

1. If B is extended, then you're required to mix in an A.
2. When a concrete class finally extends/mixes in these traits, some class/trait must implement A.

If Tweeter were a subclass of User, there would be no error. In the code above, we required a User whenever Tweeter is used; a class that mixes in Tweeter without also providing a User (called Wrong in the omitted example) fails to compile.
+Inheritance using extends does not allow that.
+sealed trait Person
+trait Student extends Person
+trait Teacher extends Person
+trait Adult { this : Person => } // orthogonal to its condition
+
+val p : Person = new Student {}
+p match {
+ case s : Student => println("a student")
+ case t : Teacher => println("a teacher")
+} // that's it we're exhaustive
+
Implicits wrap around existing classes to provide extra functionality
+object MyPredef { // usually in a companion object
+
+ class IntWrapper(val original: Int) {
+ def isOdd = original % 2 != 0
+ def isEven = !isOdd
+ }
+
+ implicit def thisMethodNameIsIrrelevant(value: Int) = new IntWrapper(value)
+}
+
+import MyPredef._
+//imported implicits come into effect within this scope
+19.isOdd
+
+// Implicits can be used to automatically convert one type to another
+import java.math.BigInteger
+implicit def Int2BigIntegerConvert(value: Int): BigInteger = new BigInteger(value.toString)
+def add(a: BigInteger, b: BigInteger) = a.add(b)
+add(3, 6) // 3 and 6 are Int
+
+// Implicits function parameters
+def howMuchCanIMake_?(hours: Int)(implicit amount: BigDecimal, currencyName: String) = (amount * hours).toString() + " " + currencyName
+implicit var hourlyRate = BigDecimal(34.00)
+implicit val currencyName = "Dollars"
+howMuchCanIMake_?(30)
+
Default arguments, though, are preferred to implicit function parameters.
import scala.reflect.runtime.universe._ // for TypeTag and typeOf

def inspect[T : TypeTag](l: List[T]) = typeOf[T].typeSymbol.name.decoded
+val list = 1 :: 2 :: 3 :: 4 :: 5 :: Nil
+inspect(list)
+
equivalent to
+def inspect[T](l: List[T])(implicit tt: TypeTag[T]) = tt.tpe.typeSymbol.name.decoded
+ val list = 1 :: 2 :: 3 :: 4 :: 5 :: Nil
+ inspect(list)
+
TypeTags can be used to determine a type before it is erased by the VM, by means of an implicit TypeTag argument.
Scaladoc
Scaladoc Style Guide
+ /** Start the comment here
+ * and use the left star followed by a
+ * white space on every line.
+ *
+ * Even on empty paragraph-break lines.
+ *
+ * Note that the * on each line is aligned
+ * with the second * in /** so that the
+ * left margin is on the same column on the
+ * first line and on subsequent ones.
+ *
+ * The closing Scaladoc tag goes on its own,
+ * separate line. E.g.
+ *
+ * Calculate the square of the given number
+ *
+ * @param d the Double to square
+ * @return the result of squaring d
+ */
+ def square(d: Double): Double = d * d
+
@constructor
placed in the class comment will describe the primary constructor.
+Method specific tags
@return
detail the return value from a method (one per method).
+Method, Constructor and/or Class tags
@throws
what exceptions (if any) the method or constructor may throw.
@param
detail a value parameter for a method or constructor, provide one per parameter to the method/constructor.
@tparam
detail a type parameter for a method, constructor or class. Provide one per type parameter.
@see
reference other sources of information like external document links or related entities in the documentation.
@note
add a note for pre or post conditions, or any other notable restrictions or expectations.
@example
for providing example code or related example documentation.
@usecase
provide a simplified method definition for when the full method definition is too complex or noisy. An example is (in the collections API), providing documentation for methods that omit the implicit canBuildFrom.
@group <group>
- mark the entity as a member of the <group> group.
@groupname <group> <name>
- provide an optional name for the group.
@groupdesc <group> <description>
- add optional descriptive text to display under the group name. Supports multiline formatted text.
@groupprio
- control the order of the group on the page. Defaults to 0. Ungrouped elements have an implicit priority of 1000. Use a value between 0 and 999 to set a relative position to other groups. Low values will appear before high values.
@contentDiagram
- use with traits and classes to include a content hierarchy diagram showing included types. The diagram content can be fine tuned with additional specifiers taken from hideNodes, hideOutgoingImplicits, hideSubclasses, hideEdges, hideIncomingImplicits, hideSuperclasses and hideInheritedNode. hideDiagram can be supplied to prevent a diagram from being created if it would be created by default. Packages and objects have content diagrams by default.
@inheritanceDiagram
@author
provide author information for the following entity
@version
the version of the system or API that this entity is a part of.
@since
like @version
but defines the system or API that this entity was first defined in.
@todo
for documenting unimplemented features or unimplemented aspects of an entity.
@deprecated
marks the entity as deprecated, providing both the replacement implementation that should be used and the version/date at which this entity was deprecated.
@migration
like deprecated but provides advanced warning of planned changes ahead of deprecation. Same fields as @deprecated
.
@inheritdoc
take comments from a superclass as defaults if comments are not provided locally.
@documentable
Expand a type alias and abstract type into a full template page. - TODO: Test the “abstract type” claim - no examples of this in the Scala code base
@define <name> <definition>
allows use of $name in other Scaladoc comments within the same source file which will be expanded to the contents of <definition>
.
`monospace`
+ ''italic text''
+ '''bold text'''
+ __underline__
+ ^superscript^
+ ,,subscript,,
+ [[entity link]], e.g. [[scala.collection.Seq]]
+ [[https://external.link External Link]],
+ e.g. [[https://scala-lang.org Scala Language Site]]
+
Paragraphs are started with one (or more) blank lines.
+*
in the margin for the comment is valid (and should be included) but the line should be blank otherwise.
Code blocks are contained within {{{ this }}} and may be multi-line.
+Indentation is relative to the starting * for the comment.
+Headings are defined with surrounding = characters, with more = denoting subheadings. E.g. =Heading=, ==Sub-Heading==, etc.
+List blocks are a sequence of list items with the same style and level, with no interruptions from other block styles. Unordered lists can be bulleted using -, while numbered lists can be denoted using 1., i., I., a. for the various numbering styles.
sbt uses the same directory structure as Maven for source files by default (all paths are relative to the base directory):
+src/
+ main/
+ resources/
+ <files to include in main jar here>
+ scala/
+ <main Scala sources>
+ java/
+ <main Java sources>
+ test/
+ resources
+ <files to include in test jar here>
+ scala/
+ <test Scala sources>
+ java/
+ <test Java sources>
+
Other directories in src/
will be ignored. Additionally, all hidden directories will be ignored.
Source code can be placed in the project’s base directory as hello/app.scala, which may be fine for small projects, though for normal projects people tend to keep their sources in the src/main/scala directory to keep things neat.
The build definition goes in a file called build.sbt
, located in the project’s base directory. The “base directory” is the directory containing the project.
+In addition to build.sbt
, the project directory can contain .scala files that defines helper objects and one-off plugins.
.gitignore
(or equivalent for other version control systems) should contain:
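Per sbt convention, that is the generated output directories:

```
target/
```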
As part of your build definition, specify the version of sbt
that your build uses. This allows people with different versions of the sbt launcher to build the same projects with consistent results.
+To do this, create a file named project/build.properties
that specifies the sbt version as follows:
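For example (the version shown is only an assumption; pin whatever your build uses):

```
sbt.version=1.2.8
```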
A build definition is defined in build.sbt, and it consists of a set of projects (of type Project). Because the term project can be ambiguous, we often call it a subproject.
+lazy val root = (project in file("."))
+ .settings(
+ name := "Hello",
+ scalaVersion := "2.12.3"
+ )
+
Each subproject is configured by key-value pairs.
+build.sbt
may also be interspersed with vals, lazy vals, and defs. Top-level objects and classes are not allowed in build.sbt
.
+Those should go in the project/
directory as Scala source files.
There are three flavors of key:
+SettingKey[T]: a key for a value computed once (the value is computed when loading the subproject, and kept around).
+TaskKey[T]: a key for a value, called a task, that has to be recomputed each time, potentially with side effects.
+InputKey[T]: a key for a task that has command line arguments as input. Check out Input Tasks for more details.
+
The built-in keys are just fields in an object called Keys. A build.sbt
implicitly has an import sbt.Keys._
, so sbt.Keys.name can be referred to as name.
To depend on third-party libraries, there are two options. The first is to drop jars in lib/
(unmanaged dependencies) and the other is to add managed dependencies, which will look like this in build.sbt
:
val derby = "org.apache.derby" % "derby" % "10.4.1.3"
+
+lazy val commonSettings = Seq(
+ organization := "com.example",
+ version := "0.1.0-SNAPSHOT",
+ scalaVersion := "2.12.3"
+)
+
+lazy val root = (project in file("."))
+ .settings(
+ commonSettings,
+ name := "Hello",
+ libraryDependencies += derby
+ )
+
The libraryDependencies key involves two complexities: `+=` rather than `:=`, and the `%` method. `+=` appends to the key’s old value rather than replacing it. The `%` method is used to construct an Ivy module ID from strings.
cluster.name
in the elasticsearch.yml
configurationcd elasticsearch-<version>
+./bin/elasticsearch -d
+# or on Windows
+# bin\elasticsearch.bat
+curl 'http://localhost:9200/?pretty'
+
Install Kibana
+config/kibana.yml
in an editor./bin/kibana
(orbin\kibana.bat on Windows)Install Sense
+On Windows:
+ +Then go to
+https://localhost:5601/app/sense
+verb is GET, POST, PUT, HEAD, or DELETE
+curl -XGET <id>.us-west-2.es.amazonaws.com
+
+curl -XGET 'https://<id>.us-west-2.es.amazonaws.com/_count?pretty' -d '{ "query": { "match_all": {} } }'
+
+curl -XPUT https://<id>.us-west-2.es.amazonaws.com/movies/movie/tt0116996 -d '{"directors" : ["Tim Burton"],"genres" : ["Comedy","Sci-Fi"], "plot": "The Earth is invaded by Martians with irresistible weapons and a cruel sense of humor.", "title" : "Mars Attacks!", "actors" :["Jack Nicholson","Pierce Brosnan","Sarah Jessica Parker"], "year" : 1996}'
+
Sense syntax is similar to curl:
+Index a document
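A sketch in Sense syntax (index/type/id are illustrative):

```
PUT /megacorp/employee/1
{ "first_name": "John", "last_name": "Smith" }
```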
+ +and retrieve it
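For example:

```
GET /megacorp/employee/1
```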
+ +URL pattern
+https://yournode:9200/_plugin/<plugin name>
On Debian, the script is in: /usr/share/elasticsearch/bin/plugin
.
Install various plugins
+./bin/plugin --install mobz/elasticsearch-head
+./bin/plugin --install lmenezes/elasticsearch-kopf/1.2
+./bin/plugin --install elasticsearch/marvel/latest
+
List installed plugins
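With the same plugin script (for these older Elasticsearch versions):

```
./bin/plugin --list
```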
+ + +Elasticsearch monitoring and management plugins
+Head
+ +elasticsearch/bin/plugin -install mobz/elasticsearch-head
BigDesk
+Live charts and statistics for elasticsearch cluster: +BigDesk
+Kopf
+ + +Marvel
+ +Aspire
+ +Aspire is a framework and libraries of extensible components designed to enable creation of solutions to acquire data from one or more content repositories (such as file systems, relational databases, cloud storage, or content management systems), extract metadata and text from the documents, analyze, modify and enhance the content and metadata if needed, and then publish each document, together with its metadata, to a search engine or other target application
+ +Integration with Hadoop
Bulk loading for Elasticsearch: https://infochimps.com
+Integration with Spring
+ +WordPress
+ +BI platforms that can use ES as an analytics engine:
+Adminer
+Database management in a single PHP file. Works with MySQL, PostgreSQL, SQLite, MS SQL, Oracle, SimpleDB, Elasticsearch, MongoDB. Needs a webserver + PHP: WAMP
+Mongolastic
+Elasticsearch-exporter
+An Elasticsearch cluster can contain multiple indices, which in turn contain multiple types. These types hold multiple documents, and each document has multiple fields.
+GET _stats/
# List indices
+ +# Get info about one index
The available features are _settings, _mappings, _warmers and _aliases.
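A sketch of both calls (my_index is an assumed index name):

```
GET /_cat/indices?v
GET /my_index/_settings,_mappings
```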
+# cluster
+ +# insert data
+PUT my_index/user/1
+{
+"first_name": "John",
+"last_name": "Smith",
+"date_of_birth": "1970-10-24"
+}
+
#search
+ +# Data schema
+ +PUT /index/type/ID
+PUT /megacorp/employee/1
+{ "first_name" : "John", "last_name" : "Smith", "age" : 25, "about" : "I love to go rock climbing", "interests": [ "sports", "music" ]}
+
+PUT /megacorp/employee/2
+{ "first_name" : "Jane", "last_name" : "Smith", "age" : 32, "about" : "I like to collect rock albums", "interests": [ "music" ]}
+
+GET /megacorp/employee/1
+
Field names can be any valid string, but may not include periods. +Every document in Elasticsearch has a version number. Every time a change is made to a document (including deleting it), the _version number is incremented.
+Optimistic concurrency control
+PUT /website/blog/1?version=1 { "title": "My first blog entry", "text": "Starting to get the hang of this..."}
+
+We want this update to succeed only if the current _version of this document in our index is version 1
+
+External version:
+
+PUT /website/blog/2?version=5&version_type=external { "title": "My first external blog entry", "text": "Starting to get the hang of this..."}
+
POST /website/blog/
+{
+"title": "My second blog entry",
+"text": "Still trying this out...",
+"date": "2014/01/01"
+}
+
Response:
+{
+"_index": "website",
+"_type": "blog",
+"_id": "AVFgSgVHUP18jI2wRx0w",
+"_version": 1,
+"created": true
+}
+
# creating an entirely new document and not overwriting an existing one
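Using the _create endpoint, which fails with a conflict if the document already exists:

```
PUT /website/blog/123/_create
{ "title": "My first blog entry" }
```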
+ +{ "_index" : "website", "_type" : "blog", "_id" : "123", "_version" : 1, "found" : true, "_source" : { "title": "My first blog entry", "text": "Just trying this out...", "date": "2014/01/01" }}
+# Contains just the fields that we requested
+ +# Just get the original doc
+ +# check if doc exists -- HTTP 200 or 404
# Note: HEAD/exists requests do not work in Sense
# because they only return HTTP headers, not
# a JSON body
+# multiple docs at once
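A sketch with the _mget endpoint:

```
GET /_mget
{
  "docs" : [
    { "_index" : "website", "_type" : "blog",      "_id" : 2 },
    { "_index" : "website", "_type" : "pageviews", "_id" : 1 }
  ]
}
```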
+ +Documents in Elasticsearch are immutable; we cannot change them. Instead, if we need to update an existing document, we reindex or replace it
+# Accepts a partial document as the doc parameter, which just gets merged with the existing document.
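For example:

```
POST /website/blog/1/_update
{ "doc" : { "tags" : [ "testing" ], "views": 0 } }
```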
+ +# Script
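A simple inline script (Groovy in this Elasticsearch era):

```
POST /website/blog/1/_update
{ "script" : "ctx._source.views+=1" }
```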
+ +# script with parameters
+POST /website/blog/1/_update
+{ "script" : "ctx._source.tags+=new_tag", "params" : { "new_tag" : "search" }}
+
# upsert
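For example, initializing the document if it does not exist yet:

```
POST /website/pageviews/1/_update
{
  "script" : "ctx._source.views+=1",
  "upsert": { "views": 1 }
}
```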
+ +# delete doc based on its contents
+POST /website/blog/1/_update { "script" : "ctx.op = ctx._source.views == count ? 'delete' : 'none'", "params" : { "count": 1 }}
+
POST /_bulk
+{"delete":{"_index":"website","_type":"blog","_id":"123"}}
+{"create":{"_index":"website","_type":"blog","_id":"123"}} # Create a document only if the document does not already exist
+{"title":"My first blog post"}
+{"index":{"_index":"website","_type":"blog"}}
+{"title":"My second blog post"}
+{"update":{"_index":"website","_type":"blog","_id":"123","_retry_on_conflict":3}}
+{"doc":{"title":"My updated blog post"}}
+
Bulk in the same index or index/type
+POST /website/_bulk
+{"index":{"_type":"log"}}
+{"event":"User logged in"}
+{"index":{"_type":"blog"}}
+{"title":"My second blog post"}
+
Try bulk requests of around 5-15 MB in size.
+Every field in a document is indexed and can be queried.
+# Search for all employees in the megacorp index:
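For example:

```
GET /megacorp/employee/_search
```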
+ +# Search for all employees in the megacorp index +# who have "Smith" in the last_name field
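Using the lite query-string syntax:

```
GET /megacorp/employee/_search?q=last_name:Smith
```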
+ +# Same query as above, but using the Query DSL
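The equivalent query expressed in the DSL:

```
GET /megacorp/employee/_search
{
  "query" : {
    "match" : {
      "last_name" : "Smith"
    }
  }
}
```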
+ +# SEARCH QUERY STRING
+ +Don't forget to URL encode special characters e.g. +name:john +tweet:mary
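URL-encoded, that query becomes:

```
GET /_search?q=%2Bname%3Ajohn+%2Btweet%3Amary
```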
+ +The + prefix indicates conditions that must be satisfied for our query to match. Similarly a - prefix would indicate conditions that must not match. All conditions without a + or - are optional
+ +When used in filtering context, the query is said to be a "non-scoring" or "filtering" query. That is, the query simply asks the question: "Does this document match?". The answer is always a simple, binary yes|no. +When used in a querying context, the query becomes a "scoring" query.
+# Find all employees whose `last_name` is Smith
+# and who are older than 30
+GET /megacorp/employee/_search
+{
+"query" : {
+ "filtered" : {
+ "filter" : {
+ "range" : {
+ "age" : { "gt" : 30 }
+ }
+ },
+ "query" : {
+ "match" : {
+ "last_name" : "smith"
+ }
+ }
+ }
+}
+}
+
# Find all employees who enjoy "rock" or "climbing"
+GET /megacorp/employee/_search
+{
+"query" : {
+ "match" : {
+ "about" : "rock climbing"
+ }
+}
+}
+
The match query should be the standard query that you reach for whenever you want to query for a full-text or exact value in almost any field. If you run a match query against a full-text field, it will analyze the query string by using the correct analyzer for that field before executing the search. If you use it on a field containing an exact value, such as a number, a date, a Boolean, or a not_analyzed string field, then it will search for that exact value.
+# Find all employees who enjoy "rock climbing"
+GET /megacorp/employee/_search
+{
+"query" : {
+ "match_phrase" : {
+ "about" : "rock climbing"
+ }
+}
+}
+
# EXACT VALUES
+The term query is used to search by exact values, be they numbers, dates, Booleans, or not_analyzed exact-value string fields
+The terms query is the same as the term query, but allows you to specify multiple values to match. If the field contains any of the specified values, the document matches
+ +# Compound Queries
+{ + "bool": { + "must": { "match": { "tweet": "elasticsearch" }}, + "must_not": { "match": { "name": "mary" }}, + "should": { "match": { "tweet": "full text" }}, + "filter": { "range": { "age" : { "gt" : 30 }} } + } + }
+# VALIDATE A QUERY
+GET /gb/tweet/_validate/query?explain { "query": { "tweet" : { "match" : "really powerful" } }}
+# understand why one particular document matched or, more important, why it didn’t match
+GET /us/tweet/12/_explain { "query" : { "bool" : { "filter" : { "term" : { "user_id" : 2 }}, "must" : { "match" : { "tweet" : "honeymoon" }} } }}
+# all documents all indices
+
/_search
/gb,us/_search              Search all types in the gb and us indices
/g*,u*/_search              Search all types in any indices beginning with g or beginning with u
/gb/user/_search            Search type user in the gb index
/gb,us/user,tweet/_search   Search types user and tweet in the gb and us indices
/_all/user,tweet/_search    Search types user and tweet in all indices

GET /_search?size=5
GET /_search?size=5&from=5
GET /_search { "query" : { "bool" : { "filter" : { "term" : { "user_id" : 1 }} } }, "sort": { "date": { "order": "desc" }}}
+
For string sorting, use multi-field mapping:
+ "tweet": { "type": "string", "analyzer": "english", "fields": { "raw": {"type": "string", "index": "not_analyzed" } }}
+
The main tweet field is just the same as before: an analyzed full-text field. +The new tweet.raw subfield is not_analyzed.
+then sort on the new field
+ GET /_search { "query": { "match": { "tweet": "elasticsearch" } }, "sort": "tweet.raw"}
+
# Find all employees who enjoy "rock climbing" - highlights +# and highlight the matches
GET /megacorp/employee/_search
{
  "query" : {
    "match_phrase" : {
      "about" : "rock climbing"
    }
  },
  "highlight": {
    "fields" : {
      "about" : {}
    }
  }
}
+An analyzer is really just a wrapper that combines three functions into a single package:
+* Character filters
+* Tokenizer
+* Token filters
+
# See how text is analyzed
+GET /_analyze { "analyzer": "standard", "text": "Text to analyze"}
+# test analyzer
+GET /gb/_analyze { "field": "tweet", "text": "Black-cats"}
+Every type has its own mapping, or schema definition. A mapping defines the fields within a type, the datatype for each field, and how the field should be handled by Elasticsearch. A mapping is also used to configure metadata associated with the type.
+You can control dynamic nature of mappings
+Mapping (or schema definition) for the tweet type in the gb index
+ +Elasticsearch supports the following simple field types:
+Fields of type string are, by default, considered to contain full text. That is, their value will be passed through an analyzer before being indexed, and a full-text query on the field will pass the query string through an analyzer before searching. +The two most important mapping attributes for string fields are index and analyzer.
+The index attribute controls how the string will be indexed. It can contain one of three values:
+If we want to map the field as an exact value, we need to set it to not_analyzed:
+ +For analyzed string fields, use the analyzer attribute to specify which analyzer to apply both at search time and at index time. By default, Elasticsearch uses the standard analyzer, but you can change this by specifying one of the built-in analyzers, such as whitespace, simple, or english:
+ +# create a new index, specifying that the tweet field should use the english analyzer
PUT /gb
{
  "mappings": {
    "tweet" : {
      "properties" : {
        "tweet" :   { "type" : "string", "analyzer": "english" },
        "date" :    { "type" : "date" },
        "name" :    { "type" : "string" },
        "user_id" : { "type" : "long" }
      }
    }
  }
}
+null, arrays, objects: see complex core fields
+DELETE /test_index
+
+PUT /test_index
+{
+ "mappings": {
+ "parent_type": {
+ "properties": {
+ "num_prop": {
+ "type": "integer"
+ },
+ "str_prop": {
+ "type": "string"
+ }
+ }
+ },
+ "child_type": {
+ "_parent": {
+ "type": "parent_type"
+ },
+ "properties": {
+ "child_num": {
+ "type": "integer"
+ },
+ "child_str": {
+ "type": "string"
+ }
+ }
+ }
+ }
+}
+
+POST /test_index/_bulk
+{"index":{"_type":"parent_type","_id":1}}
+{"num_prop":1,"str_prop":"hello"}
+{"index":{"_type":"child_type","_id":1,"_parent":1}}
+{"child_num":11,"child_str":"foo"}
+{"index":{"_type":"child_type","_id":2,"_parent":1}}
+{"child_num":12,"child_str":"bar"}
+{"index":{"_type":"parent_type","_id":2}}
+{"num_prop":2,"str_prop":"goodbye"}
+{"index":{"_type":"child_type","_id":3,"_parent":2}}
+{"child_num":21,"child_str":"baz"}
+
+POST /test_index/child_type/_search
+
+POST /test_index/child_type/2?parent=1
+{
+ "child_num": 13,
+ "child_str": "bars"
+}
+
+POST /test_index/child_type/_search
+
+POST /test_index/child_type/3/_update?parent=2
+{
+ "script": "ctx._source.child_num+=1"
+}
+
+POST /test_index/child_type/_search
+
+POST /test_index/child_type/_search
+{
+ "query": {
+ "term": {
+ "child_str": {
+ "value": "foo"
+ }
+ }
+ }
+}
+
+POST /test_index/parent_type/_search
+{
+ "query": {
+ "filtered": {
+ "query": {
+ "match_all": {}
+ },
+ "filter": {
+ "has_child": {
+ "type": "child_type",
+ "filter": {
+ "term": {
+ "child_str": "foo"
+ }
+ }
+ }
+ }
+ }
+ }
+}
+
Aggregations and searches can span multiple indices
+# Calculate the most popular interests for all employees
GET /megacorp/employee/_search
{
  "aggs": {
    "all_interests": {
      "terms": { "field": "interests" }
    }
  }
}
+# Calculate the most popular interests for +# employees named "Smith"
GET /megacorp/employee/_search
{
  "query": {
    "match": {
      "last_name": "smith"
    }
  },
  "aggs": {
    "all_interests": {
      "terms": {
        "field": "interests"
      }
    }
  }
}
+# Calculate the average age of employee per interest - hierarchical aggregates
GET /megacorp/employee/_search
{
  "aggs" : {
    "all_interests" : {
      "terms" : { "field" : "interests" },
      "aggs" : {
        "avg_age" : {
          "avg" : { "field" : "age" }
        }
      }
    }
  }
}
# requires in config/elasticsearch.yml:
#   script.inline: true
#   script.indexed: true

GET /tlo/contacts/_search
{
  "size" : 0,
  "query": {
    "constant_score": {
      "filter": {
        "terms": {
          "version": [ "20160301", "20160401" ]
        }
      }
    }
  },
  "aggs": {
    "counts": {
      "cardinality": {
        "script": "doc['first_name'].value + ' ' + doc['last_name'].value + ' ' + doc['company'].value",
        "missing": "N/A"
      }
    }
  }
}
+By default, indices are assigned five primary shards. The number of primary shards can be set only when an index is created and never changed
+# Add an index
PUT /blogs
{ "settings" : { "number_of_shards" : 3, "number_of_replicas" : 1 }}

PUT /blogs/_settings
{ "number_of_replicas" : 2 }
+
GET /_cluster/health
+
yaml file
+Sets the JVM heap size to 0.5 memory size. The OS will use it for file system cache
+to override the configuration file
+Sites plugins -- kopf / head / paramedic / bigdesk / kibana +* contain static web content (JS, HTML....)
+Install plugins on ALL machines of the cluster
+To install,
+ +One type per index is recommended, except for parent child / nested indexes.
+index size optimization:
+_source
and _all
(the index that captures every field - not needed unless the top search bar changes)_all
data types: +string, number, bool, datetime, binary, array, object, geo_point, geo_shape, ip, multifield +binary should be base64 encoded before storage
+Steps to restore elastic search data:
+The commands to do the above are as below:
+systemctl stop elasticsearch
systemctl start elasticsearch
systemctl daemon-reload elasticsearch
logstash -w 4
to set the number of worker threads
Use path.data
to distribute the data on multiple (EBS) disks
Use "date" for normalizing dates:
+filter {
+ date {
+   # needs a match option to do anything, e.g. match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ] (field name and format are assumptions)
+   timezone => "America/Los_Angeles"
+   locale => "en" # English
+
+ }
+ geoip {
+
+ source => "clientip" # will read from clientip field
+ database => ... # use MaxMind's GeoLiteCity by default
+ }
+ useragent {
+
+ }
+}
+
filter and outputs

Sublime Text
+for Python
+Docker
Help > Install New Software.
.Search Anywhere Double Shift
+Got to file Ctrl + Shift + N
+Recent files Ctrl + E
+Code Completion Ctrl + Space
+Parameters Ctrl + P
+Highlight usages in file Ctrl + Shift + F7
+Declaration of the current method Alt + Q
+Code Templates Ctrl + J
+
<!-- Latest compiled and minified CSS --><link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" integrity="sha384-1q8mTJOASx8j1Au+a5WDVnPi2lkFfwwEAa8hDDdjZlpLegxhjVME1fgjWPGmkzs7" crossorigin="anonymous">
+
+<!-- Optional theme --><link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap-theme.min.css" integrity="sha384-fLW2N01lMqjakBkx3l/M9EahuwpSfeNvV63J5ezn3uZzapT0u7EYsXMjQV+0En5r" crossorigin="anonymous">
+
+<!-- Latest compiled and minified JavaScript --><script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js" integrity="sha384-0mSbJDEHialfmuBBQP6A4Qrprq5OVfW37PRR3j5ELqxss1yVqOtnepnHVP9aJ7xS" crossorigin="anonymous"></script>
+
Certain "cross-domain" requests, notably AJAX requests, are forbidden by default by the same-origin security policy of web browsers.
+The same-origin policy is an important security concept implemented by web browsers to prevent Javascript code from making requests against a different origin (e.g., different domain, more precisely combination of URI scheme, hostname, and port number ) than the one from which it was served. Although the same-origin policy is effective in preventing resources from different origins, it also prevents legitimate interactions between a server and clients of a known and trusted origin.
+Cross-Origin Resource Sharing (CORS) is a technique for relaxing the same-origin policy, allowing Javascript on a web page to consume a REST API served from a different origin.
+Cross-origin requests come in two flavors:
+Simple requests are requests that meet the following criteria:
+HTTP Method matches (case-sensitive) one of:
+HTTP Headers matches (case-insensitive):
+application/x-www-form-urlencoded
, multipart/form-data
, text/plain
A not-so-simple request looks like a single request to the client, but it actually consists of two requests under the hood. The browser first issues a preflight request, which is like asking the server for permission to make the actual request. Once permissions have been granted, the browser makes the actual request. The browser handles the details of these two requests transparently. The preflight response can also be cached so that it is not issued on every request.
+Some Javascript libraries, such as AngularJS and Sencha Touch, send preflight requests for any kind of request. This approach is arguably safer, because it doesn't assume that a service adheres to HTTP method semantics (i.e., a GET endpoint could have been written to have side effects.)
+The jQuery library exposes its methods and properties via two properties of the window object called jQuery and $. $ is simply an alias for jQuery and it's often employed because it's shorter and faster to write.
+Either directly
+<!doctype html>
+<html>
+<head>
+<meta charset="utf-8">
+<title>Demo</title>
+</head>
+<body>
+<a href="https://jquery.com/">jQuery</a>
+<script src="jquery.js"></script>
+<script>
+// Your code goes here.
+</script>
+</body>
+</html>
or via a CDN
+<head>
+<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
+</head>
+
$( document ).ready(function() {
+// Your code here.
+});
+
+// Shorthand for $( document ).ready()
+$(function() {
+console.log( "ready!" );
+});
+
$( "#myId" ); // Note IDs must be unique per page.
+$( ".myClass" );
+$( "input[name='first_name']" );
+$( "#contents ul.people li" );
+$( "div.myClass, ul.people" );
+
If you have a variable containing a DOM element, and want to select elements related to that DOM element, simply wrap it in a jQuery object.
+var myDomElement = document.getElementById( "foo" ); // A plain DOM element.
+$( myDomElement ).find( "a" ); // Finds all anchors inside the DOM element.
+
A jQuery object is an array-like wrapper around one or more DOM elements.
+ +// Testing whether a selection contains elements.
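The usual test is the length property, since a jQuery object is always truthy even when empty:

```js
// Doesn't work -- a jQuery object is always truthy:
// if ( $( "div.foo" ) ) { ... }

if ( $( "div.foo" ).length ) {
  // the selection contains at least one element
}
```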
+ +$( "div.foo" ).has( "p" ); // div.foo elements that contain <p> tags
+$( "h1" ).not( ".bar" ); // h1 elements that don't have a class of bar
+$( "ul li" ).filter( ".current" ); // unordered list items with class of current
+$( "ul li" ).first(); // just the first unordered list item
+$( "ul li" ).eq( 5 ); // the sixth
+
+$( "form :checked" ); // :checked targets checked checkboxes
+
Get the <button>
element with the class 'continue' and change its HTML to 'Next Step...'
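The call itself:

```js
$( "button.continue" ).html( "Next Step..." );
```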
$( "#content" ).find( "h3" ).eq( 2 ).html( "new text for the third h3!" );
+$( "#content" )
+.find( "h3" )
+.eq( 2 )
+.html( "new text for the third h3!" )
+.end() // Restores the selection to all h3s in #content
+.eq( 0 )
+.html( "new text for the first h3!" );
+
Many jQuery methods implicitly iterate over the entire collection, applying their behavior to each matched element. In most cases, the "getter" signature returns the result from the first element in a jQuery collection while the setter acts over the entire collection of matched elements.
+$( "li" ).addClass( "newClass" ); // Each <li> in the document will have the class "newClass" added.
+
$( "a" ).addClass( "test" );
+$( "a" ).removeClass( "test" );
+
+$( "div" ).click(function() {
+if ( $( this ).hasClass( "protected" ) ) {
+$( this )
+.animate({ left: -10 })
+.animate({ left: 10 })
+.animate({ left: -10 })
+.animate({ left: 10 })
+.animate({ left: 0 });
+}
+});
+
+if ( $( "#myDiv" ).is( ".pretty.awesome" ) ) {
+$( "#myDiv" ).show();
+}
+var isVisible = $( "#myDiv" ).is( ":visible" );
+if ( $( "#myDiv" ).is( ":hidden" ) ) {
+$( "#myDiv" ).show();
+}
+
$( "a" ).attr( "href", "allMyHrefsAreTheSameNow.html" );
+$( "a" ).attr({
+title: "all titles are the same too!",
+href: "somethingNew.html"
+});
+
Getting CSS properties.
+$( "h1" ).css( "fontSize" ); // Returns a string such as "19px".$( "h1" ).css( "font-size" ); // Also works.
+
Setting CSS properties.
+$( "h1" ).css( "fontSize", "100px" ); // Setting an individual property.// Setting multiple properties.$( "h1" ).css({fontSize: "100px",color: "red"});
+
Storing and retrieving data related to an element.
+$( "#myDiv" ).data( "keyName", { foo: "bar" } );
+$( "#myDiv" ).data( "keyName" ); // Returns { foo: "bar" }
+
Storing a relationship between elements using .data()
+$( "#myList li" ).each(function() {
+var li = $( this );
+var div = li.find( "div.content" );
+li.data( "contentDiv", div );
+});
+
Later, we don't have to find the div again; we can just read it from the list item's data
+ +.trim, .each, .map, inArray, isArray, isFunction, isNumeric, .type
+Returns "lots of extra whitespace"
+ +$.each([ "foo", "bar", "baz" ], function( idx, val ) {
+console.log( "element " + idx + " is " + val );
+});
+$.each({ foo: "bar", baz: "bim" }, function( k, v ) {
+console.log( k + " : " + v );
+});
+
HOWEVER, use this form for jQuery objects
+$( "li" ).each( function( index, element ){
+console.log( $( this ).text() );
+});
+
+var myArray = [ 1, 2, 3, 5 ];
+if ( $.inArray( 4, myArray ) !== -1 ) {
+console.log( "found it!" );
+}
+
$.isArray([]); // true
+$.isFunction(function() {}); // true
+$.isNumeric(3.14); // true
+
+$.type( true ); // "boolean"
+$.type( 3 ); // "number"
+$.type( "test" ); // "string"
+$.type( function() {} ); // "function"
+$.type( new Boolean() ); // "boolean"
+$.type( new Number(3) ); // "number"
+$.type( new String('test') ); // "string"
+$.type( new Function() ); // "function"
+$.type( [] ); // "array"
+$.type( null ); // "null"
+$.type( /test/ ); // "regexp"
+$.type( new Date() ); // "date"
+
<li id="a"></li>
+<li id="b"></li>
+<li id="c"></li>
+<script>
+var arr = [{
+id: "a",
+tagName: "li"
+}, {
+id: "b",
+tagName: "li"
+}, {
+id: "c",
+tagName: "li"
+}];
+
+// Returns [ "a", "b", "c" ]
+$( "li" ).map( function( index, element ) {
+return element.id;
+}).get();
+
+// Also returns [ "a", "b", "c" ]
+// Note that the value comes first with $.map
+$.map( arr, function( value, index ) {
+return value.id;
+});
+
var hiddenBox = $( "#banner-message" );
+$( "#button-container button" ).on( "click", function( event ) {
+hiddenBox.show();
+});
+
The on
method is useful for binding the same handler function to multiple events, when you want to provide data to the event handler, when you are working with custom events, or when you want to pass an object of multiple events and handlers.
The event object is most commonly used to prevent the default action of the event via the .preventDefault()
method. However, the event object contains a number of other useful properties and methods, including:
pageX, pageY, type, which, data
+
Use this code to inspect it in your browser console
+$( "div" ).on( "click", function( event ) {
+console.log( "event object:" );
+console.dir( event );
+});
+
$( "a" ).click(function( eventObject ) {
+var elem = $( this );
+if ( elem.attr( "href" ).match( /evil/ ) ) {
+eventObject.preventDefault();
+elem.addClass( "evil" );
+}
+});
+
.on()
method with data¶$( "input" ).on(
+"change",
+{ foo: "bar" }, // Associate data with event binding
+function( eventObject ) {
+console.log("An input value has changed! ", eventObject.data.foo);
+}
+);
+
// Binding multiple events with different handlers
+$( "p" ).on({
+"click": function() { console.log( "clicked!" ); },
+"mouseover": function() { console.log( "hovered!" ); }
+});
+
// Tearing down all click handlers on a selection
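For example:

```js
$( "p" ).off( "click" );
```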
// As of jQuery 1.7, attach an event handler to the body element that
// is listening for clicks, and will respond whenever any button is
// clicked on the page.
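A sketch (the button selector is illustrative):

```js
$( "body" ).on( "click", "button", function() {
  console.log( "button clicked" );
});
```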
// An alternative to the previous example, using slightly different syntax.
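For example, passing an events map plus the selector:

```js
$( "body" ).on({
  click: function() {
    console.log( "button clicked" );
  }
}, "button" );
```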
+ +// Attach a delegated event handler with a more refined selector
+$( "#list" ).on( "click", "a[href^='http']", function( event ) {
+$( this ).attr( "target", "_blank" );
+});
+
Instantaneously hide all paragraphs
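For example:

```js
$( "p" ).hide();
```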
+ +Instantaneously show all divs that have the hidden style class
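For example:

```js
$( "div.hidden" ).show();
```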
+ +Instantaneously toggle the display of all paragraphs
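For example:

```js
$( "p" ).toggle();
```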
+ +Fade in all hidden paragraphs; then add a style class to them (correct with animation callback)
+$( "p.hidden" ).fadeIn( 750, function() {
+// this = DOM element which has just finished being animated
+$( this ).addClass( "lookAtMe" );
+});
+
$.ajax({
+url: "/api/getWeather",
+data: {
+zipcode: 97201
+},
+success: function( result ) {
+$( "#weather-temp" ).html( "<strong>" + result + "</strong> degrees" );
+}
+});
+
Using the core $.ajax() method
+$.ajax({
+// The URL for the request
+url: "post.php",
+// The data to send (will be converted to a query string)
+data: {
+id: 123
+},
+// Whether this is a POST or GET request
+type: "GET",
+// The type of data we expect back
+dataType : "json",
+})
+// Code to run if the request succeeds (is done);
+// The response is passed to the function
+.done(function( json ) {
+$( "<h1>" ).text( json.title ).appendTo( "body" );
+$( "<div class=\"content\">").html( json.html ).appendTo( "body" );
+})
+// Code to run if the request fails; the raw request and
+// status codes are passed to the function
+.fail(function( xhr, status, errorThrown ) {
+alert( "Sorry, there was a problem!" );
+console.log( "Error: " + errorThrown );
+console.log( "Status: " + status );
+console.dir( xhr );
+})
+
Code to run regardless of success or failure;
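Continuing the chain above with .always():

```js
.always(function( xhr, status ) {
  alert( "The request is complete!" );
});
```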
+ +Simple convenience methods such as $.get(), $.getScript(), $.getJSON(), $.post(), and $().load().
+$.get( "myhtmlpage.html", myCallBack ); // myCallback needs to be a parameterless function
// with parameters
+$.get( "myhtmlpage.html", function() {
+myCallBack( param1, param2 );
+});
+
Using .load() to populate an element
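For example:

```js
$( "#newContent" ).load( "/foo.html" );
```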
+ +Using .load() to populate an element based on a selector
+$( "#newContent" ).load( "/foo.html #myDiv h1:first", function( html ) {
+alert( "Content updated!" );
+});
+
Turning form data into a query string
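With .serialize() (the form id is illustrative):

```js
$( "#myForm" ).serialize();
```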
+ +// Creates a query string like this: +// field_1=something&field2=somethingElse
+Create an array of objects containing form data
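With .serializeArray():

```js
$( "#myForm" ).serializeArray();
// Creates an array of objects like:
// [ { name: "field_1", value: "something" }, { name: "field_2", value: "somethingElse" } ]
```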
+ +Use validation to check for the presence of an input
+$( "#form" ).submit(function( event ) {
+// If .required's value's length is zero
+if ( $( ".required" ).val().length === 0 ) {
+// Usually show some kind of error message here
+// Prevent the form from submitting
+event.preventDefault();
+} else {
+// Run $.ajax() here
+}
+});
+
Validate a phone number field
+$( "#form" ).submit(function( event ) {
+var inputtedPhoneNumber = $( "#phone" ).val();
+// Match only numbers
+var phoneNumberRegex = /^\d*$/;
+// If the phone number doesn't match the regex
+if ( !phoneNumberRegex.test( inputtedPhoneNumber ) ) {
+// Usually show some kind of error message here
+// Prevent the form from submitting
+event.preventDefault();
+} else {
+// Run $.ajax() here
+}
+});
+
Helper libraries (like Modernizr) that provide a simple, high-level API for determining if a browser has a specific feature available or not.
Just type "cmd" into the location bar of Windows Explorer. It will start a new command prompt in the current path.
+OR
+Hold the "Shift" key while right-clicking a blank space in the desired folder to bring up a more verbose context menu. One of the options is "Open PowerShell Here".
+To re-enable the "Open Command Prompt Here" (disabled by the Windows 10 Creators Update):
+ + + + + + + + + + + + + + + + + + + +{"use strict";/*!
+ * escape-html
+ * Copyright(c) 2012-2013 TJ Holowaychuk
+ * Copyright(c) 2015 Andreas Lubbe
+ * Copyright(c) 2015 Tiancheng "Timothy" Gu
+ * MIT Licensed
+ */var Va=/["'&<>]/;qn.exports=za;function za(e){var t=""+e,r=Va.exec(t);if(!r)return t;var o,n="",i=0,s=0;for(i=r.index;i