binary | hash alg | hash |
---|---|---|
kubernetes.tar.gz | md5 | c0ce9e6150e9d7a19455db82f3318b4c |
kubernetes.tar.gz | sha1 | 52dd998e1191f464f581a9b87017d70ce0b058d9 |
- Significant scale improvements. Increased cluster scale by 400% to 1000 nodes with 30,000 pods per cluster. Kubelet supports 100 pods per node with 4x reduced system overhead.
- Simplified application deployment and management.
- Dynamic Configuration (ConfigMap API in the core API group) enables application configuration to be stored as a Kubernetes API object and pulled dynamically on container startup, as an alternative to baking in command-line flags when a container is built.
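  As a hedged sketch of this pattern (all names, keys, and the image are hypothetical), a ConfigMap and a container that reads one of its values at startup might look like:

  ```yaml
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: app-config              # hypothetical name
  data:
    log-level: debug              # hypothetical key/value
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: app
  spec:
    containers:
    - name: app
      image: busybox              # hypothetical image
      command: ["sh", "-c", "echo log level is $LOG_LEVEL; sleep 3600"]
      env:
      - name: LOG_LEVEL
        valueFrom:
          configMapKeyRef:        # value is pulled from the ConfigMap at container startup
            name: app-config
            key: log-level
  ```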
- Turnkey Deployments (Deployment API (Beta) in the Extensions API group) automate deployment and rolling updates of applications, specified declaratively. It handles versioning, multiple simultaneous rollouts, aggregating status across all pods, maintaining application availability, and rollback.
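  A minimal declarative sketch, assuming the Beta Deployment API named above (name and image are hypothetical):

  ```yaml
  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: nginx-deployment        # hypothetical name
  spec:
    replicas: 3
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
        - name: nginx
          image: nginx:1.9        # changing this field later triggers a rolling update
  ```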
- Automated cluster management:
- Kubernetes clusters can now span zones within a cloud provider. Pods from a service will be automatically spread across zones, enabling applications to tolerate zone failure.
- Simplified way to run a container on every node (DaemonSet API (Beta) in the Extensions API group): Kubernetes can schedule a service (such as a logging agent) that runs one, and only one, pod per node.
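  A hedged sketch of a per-node logging agent as a Beta DaemonSet (name and image are hypothetical):

  ```yaml
  apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    name: logging-agent           # hypothetical name
  spec:
    template:
      metadata:
        labels:
          app: logging-agent
      spec:
        containers:
        - name: agent
          image: example/log-agent:1.0  # hypothetical image; one pod of it runs on every node
  ```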
- TLS and L7 support (Ingress API (Beta) in the Extensions API group): Kubernetes is now easier to integrate into custom networking environments by supporting TLS for secure communication and L7 http-based traffic routing.
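  A hedged sketch of a Beta Ingress combining TLS with L7 path routing (host, secret, and service names are hypothetical):

  ```yaml
  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: example-ingress         # hypothetical name
  spec:
    tls:
    - secretName: example-tls     # hypothetical Secret holding the TLS cert and key
    rules:
    - host: example.com
      http:
        paths:
        - path: /api
          backend:
            serviceName: api-svc  # hypothetical backing Service
            servicePort: 80
  ```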
- Graceful Node Shutdown (aka drain) - The new “kubectl drain” command gracefully evicts pods from nodes in preparation for disruptive operations like kernel upgrades or maintenance.
- Custom Metrics for Autoscaling (HorizontalPodAutoscaler API in the Autoscaling API group): The Horizontal Pod Autoscaling feature now supports custom metrics (Alpha), allowing you to specify application-level metrics and thresholds to trigger scaling up and down the number of pods in your application.
- New GUI (dashboard) allows you to get started quickly and enables the same functionality found in the CLI as a more approachable and discoverable way of interacting with the system. Note: the GUI is enabled by default in 1.2 clusters.
- Job was Beta in 1.1 and is GA in 1.2. `apiVersion: batch/v1` is now available. You no longer need to specify the `.spec.selector` field — a unique selector is automatically generated for you.
  - The previous version, `apiVersion: extensions/v1beta1`, is still supported. Even if you roll back to 1.1, the objects created using the new apiVersion will still be accessible, using the old version. You can continue to use your existing JSON and YAML files until you are ready to switch to `batch/v1`. We may remove support for Jobs with `apiVersion: extensions/v1beta1` in 1.3 or 1.4.
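  A minimal sketch of a `batch/v1` Job (name, image, and command are hypothetical); note that `.spec.selector` is simply omitted:

  ```yaml
  apiVersion: batch/v1
  kind: Job
  metadata:
    name: example-job             # hypothetical name
  spec:
    template:
      spec:
        containers:
        - name: worker
          image: busybox          # hypothetical image
          command: ["echo", "hello from a batch/v1 Job"]
        restartPolicy: Never      # Jobs require Never or OnFailure
  # no .spec.selector: a unique selector is generated automatically
  ```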
- HorizontalPodAutoscaler was Beta in 1.1 and is GA in 1.2. `apiVersion: autoscaling/v1` is now available. Changes in this version are:
  - The CPUUtilization field, which was a nested CPUTargetUtilization structure in HorizontalPodAutoscalerSpec, was replaced by TargetCPUUtilizationPercentage, which is an integer.
  - ScaleRef of type SubresourceReference in HorizontalPodAutoscalerSpec, which referred to the scale subresource of the resource being scaled, was replaced by ScaleTargetRef, which points just to the resource being scaled.
  - In extensions/v1beta1, if CPUUtilization in HorizontalPodAutoscalerSpec was not specified, it was set to 80 by default, while in autoscaling/v1 an HPA object without TargetCPUUtilizationPercentage specified is a valid object. The pod autoscaler controller will apply a default scaling policy in this case, which is equivalent to the previous one but may change in the future.
  - The previous version, `apiVersion: extensions/v1beta1`, is still supported. Even if you roll back to 1.1, the objects created using the new apiVersion will still be accessible, using the old version. You can continue to use your existing JSON and YAML files until you are ready to switch to `autoscaling/v1`. We may remove support for HorizontalPodAutoscalers with `apiVersion: extensions/v1beta1` in 1.3 or 1.4.
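  A hedged sketch of the GA form (names are hypothetical), showing `scaleTargetRef` replacing the old ScaleRef and the integer `targetCPUUtilizationPercentage` replacing the nested CPUUtilization structure:

  ```yaml
  apiVersion: autoscaling/v1
  kind: HorizontalPodAutoscaler
  metadata:
    name: example-hpa             # hypothetical name
  spec:
    scaleTargetRef:               # points at the resource being scaled, not its scale subresource
      apiVersion: extensions/v1beta1
      kind: Deployment
      name: example-deployment   # hypothetical target
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 80  # plain integer; may be omitted in autoscaling/v1
  ```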
- Kube-Proxy now defaults to an iptables-based proxy. If the --proxy-mode flag is specified while starting kube-proxy (‘userspace’ or ‘iptables’), the flag value will be respected. If the flag value is not specified, the kube-proxy respects the Node object annotation ‘net.beta.kubernetes.io/proxy-mode’. If the annotation is not specified either, ‘iptables’ mode is the default. If kube-proxy is unable to start in iptables mode because system requirements are not met (kernel or iptables versions are insufficient), it will fall back to userspace mode. Kube-proxy is much more performant and less resource-intensive in ‘iptables’ mode.
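  For example, a hedged sketch of pinning a single node to the userspace proxy via that annotation (node name is hypothetical):

  ```yaml
  apiVersion: v1
  kind: Node
  metadata:
    name: node-1                  # hypothetical node name
    annotations:
      net.beta.kubernetes.io/proxy-mode: userspace  # read by kube-proxy when --proxy-mode is not set
  ```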
- Node stability can be improved by reserving resources for the base operating system using the --system-reserved and --kube-reserved Kubelet flags.
- Liveness and readiness probes now support more configuration parameters: periodSeconds, successThreshold, and failureThreshold.
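  A hedged sketch of the new probe parameters (name, image, and endpoint are hypothetical):

  ```yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: probed-pod              # hypothetical name
  spec:
    containers:
    - name: app
      image: nginx                # hypothetical image
      readinessProbe:
        httpGet:
          path: /                 # hypothetical health endpoint
          port: 80
        periodSeconds: 10         # probe every 10 seconds
        successThreshold: 1       # one success marks the container ready
        failureThreshold: 3       # three consecutive failures mark it unready
  ```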
- The new ReplicaSet API (Beta) in the Extensions API group is similar to ReplicationController, but its selector is more general (supports set-based selector; whereas ReplicationController only supports equality-based selector).
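  A hedged sketch of a ReplicaSet using a set-based selector (names and image are hypothetical):

  ```yaml
  apiVersion: extensions/v1beta1
  kind: ReplicaSet
  metadata:
    name: frontend                # hypothetical name
  spec:
    replicas: 2
    selector:
      matchExpressions:           # set-based; not expressible with a ReplicationController
      - {key: tier, operator: In, values: [frontend, canary]}
    template:
      metadata:
        labels:
          tier: frontend
      spec:
        containers:
        - name: app
          image: nginx            # hypothetical image
  ```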
- Scale subresource support is now expanded to ReplicaSets along with ReplicationControllers and Deployments. Scale now supports two different types of selectors to accommodate both equality-based selectors supported by ReplicationControllers and set-based selectors supported by Deployments and ReplicaSets.
- “kubectl run” now produces Deployments (instead of ReplicationControllers) and Jobs (instead of Pods) by default.
- Pods can now consume Secret data in environment variables and inject those environment variables into a container’s command-line args.
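  A hedged sketch of both halves (names, key, and value are hypothetical):

  ```yaml
  apiVersion: v1
  kind: Secret
  metadata:
    name: db-secret               # hypothetical name
  type: Opaque
  data:
    password: cGFzc3dvcmQ=        # base64 of the hypothetical value "password"
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: db-client
  spec:
    containers:
    - name: client
      image: busybox              # hypothetical image
      env:
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:           # Secret data consumed as an environment variable
            name: db-secret
            key: password
      command: ["echo"]
      args: ["$(DB_PASSWORD)"]    # the env var is expanded into the command-line args
  ```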
- Stable version of Heapster which scales up to 1000 nodes: more metrics, reduced latency, reduced CPU/memory consumption (~4 MB per monitored node).
- Pods now have a security context which allows users to specify:
  - attributes which apply to the whole pod:
    - User ID
    - Whether all containers should be non-root
    - Supplemental Groups
    - FSGroup - a special supplemental group
    - SELinux options
  - If a pod defines an FSGroup, that Pod’s system (emptyDir, secret, configMap, etc.) volumes and block-device volumes will be owned by the FSGroup, and each container in the pod will run with the FSGroup as a supplemental group
  - Volumes that support SELinux labelling are now automatically relabeled with the Pod’s SELinux context, if specified
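  A hedged sketch mapping those attributes onto a pod manifest (names and values are hypothetical):

  ```yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: secure-pod              # hypothetical name
  spec:
    securityContext:              # attributes applying to the whole pod
      runAsUser: 1000             # User ID
      runAsNonRoot: true          # all containers must run as non-root
      supplementalGroups: [2000]  # Supplemental Groups
      fsGroup: 3000               # the emptyDir volume below will be owned by this group
      seLinuxOptions:
        level: "s0:c123,c456"     # hypothetical SELinux level
    containers:
    - name: app
      image: busybox              # hypothetical image
      command: ["sleep", "3600"]
      volumeMounts:
      - name: scratch
        mountPath: /scratch
    volumes:
    - name: scratch
      emptyDir: {}
  ```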
- A stable client library release_1_2 is added. The library is here, and detailed doc is here. We will keep the interface of this go client stable.
- New Azure File Service Volume Plugin enables mounting Microsoft Azure File Volumes (SMB 2.1 and 3.0) into a Pod. See example for details.
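  A hedged sketch (share, secret, and names are hypothetical; the Secret is assumed to hold the Azure storage account name and key):

  ```yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: azure-pod               # hypothetical name
  spec:
    containers:
    - name: app
      image: nginx                # hypothetical image
      volumeMounts:
      - name: azure
        mountPath: /mnt/azure
    volumes:
    - name: azure
      azureFile:
        secretName: azure-secret  # hypothetical Secret with the storage account credentials
        shareName: myshare        # hypothetical SMB file share
        readOnly: false
  ```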
- Log usage and root filesystem usage of a container, volume usage of a pod, and node disk usage are exposed through the new Kubelet metrics API.
- Dynamic Provisioning of PersistentVolumes: Kubernetes previously required all volumes to be manually provisioned by a cluster administrator before use. With this feature, volume plugins that support it (GCE PD, AWS EBS, and Cinder) can automatically provision a PersistentVolume to bind to an unfulfilled PersistentVolumeClaim.
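  As a hedged sketch: in the 1.2 alpha implementation, provisioning is requested by annotating an unbound PersistentVolumeClaim; the annotation key below is our understanding of the alpha mechanism and should be treated as an assumption:

  ```yaml
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: claim1                  # hypothetical name
    annotations:
      volume.alpha.kubernetes.io/storage-class: "foo"  # assumed alpha annotation; presence triggers provisioning
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 3Gi
  ```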
- Run multiple schedulers in parallel, e.g. one or more custom schedulers alongside the default Kubernetes scheduler, using pod annotations to select among the schedulers for each pod. Documentation is here, design doc is here.
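  A hedged sketch of selecting a scheduler per pod (the annotation key is our assumption for the 1.2 alpha mechanism; the scheduler and pod names are hypothetical):

  ```yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: custom-scheduled-pod    # hypothetical name
    annotations:
      scheduler.alpha.kubernetes.io/name: my-scheduler  # assumed annotation; names the scheduler to use
  spec:
    containers:
    - name: app
      image: nginx                # hypothetical image
  ```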
- More expressive node affinity syntax, and support for “soft” node affinity. Node selectors (to constrain pods to schedule on a subset of nodes) now support the operators `In, NotIn, Exists, DoesNotExist, Gt, Lt` instead of just conjunction of exact matches on node label values. In addition, we’ve introduced a new “soft” kind of node selector that is just a hint to the scheduler; the scheduler will try to satisfy these requests but does not guarantee they will be satisfied. Both the “hard” and “soft” variants of node affinity use the new syntax. Documentation is here (see section “Alpha feature in Kubernetes v1.2: Node Affinity”). Design doc is here.
- A pod can specify its own Hostname and Subdomain via annotations (`pod.beta.kubernetes.io/hostname`, `pod.beta.kubernetes.io/subdomain`). If the Subdomain matches the name of a headless service in the same namespace, a DNS A record is also created for the pod’s FQDN; a sketch follows after this list. More details can be found in the DNS README. Changes were introduced in PR #20688.
- New SchedulerExtender enables users to implement custom out-of-process scheduling predicates and priority functions, for example to schedule pods based on resources that are not directly managed by Kubernetes. Changes were introduced in PR #13580. Example configuration and documentation is available here. This is an alpha feature and may not be supported in its current form at beta or GA.
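  As referenced above, a hedged sketch of the hostname and subdomain annotations (names are hypothetical; the subdomain should match a headless service name for the DNS record to be created):

  ```yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox1                # hypothetical name
    annotations:
      pod.beta.kubernetes.io/hostname: busybox-1            # hypothetical hostname
      pod.beta.kubernetes.io/subdomain: default-subdomain   # hypothetical; matches a headless Service name
  spec:
    containers:
    - name: busybox
      image: busybox              # hypothetical image
      command: ["sleep", "3600"]
  ```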
- New Flex Volume Plugin enables users to mount vendor volumes into a pod via out-of-process volume plugins that are installed to “/usr/libexec/kubernetes/kubelet-plugins/volume/exec/” on every node, instead of being compiled into the Kubernetes binary. It expects vendor drivers to be installed in the volume plugin path on each kubelet node. This is an alpha feature and may change in the future. See example for details.
- Kubelet exposes a new Alpha metrics API - /stats/summary - in a user-friendly format with reduced system overhead. Measurements of the reduced overhead can be found in PR #22542.
- Docker v1.9.1 is officially recommended. Docker v1.8.3 and Docker v1.10 are supported. If you are using an older release of Docker, please upgrade. Known issues with Docker 1.9.1 can be found below.
- CPU hardcapping will be enabled by default for containers with CPU limit set, if supported by the kernel. You should either adjust your CPU limit, or set CPU request only, if you want to avoid hardcapping. If the kernel does not support CPU Quota, NodeStatus will contain a warning indicating that CPU Limits cannot be enforced.
- The following applies only if you use the Go language client (`/pkg/client/unversioned`) to create Jobs by defining Go variables of type `"k8s.io/kubernetes/pkg/apis/extensions".Job`. We think this is not common, so if you are not sure what this means, you probably aren't doing it. If you are, then at the time you re-vendor the `"k8s.io/kubernetes/"` code, you will need to set `job.Spec.ManualSelector = true`, or else set `job.Spec.Selector = nil`. Otherwise, the jobs you create may be rejected. See Specifying your own pod selector.
- Deployment was Alpha in 1.1 (though it had apiVersion extensions/v1beta1) and was disabled by default. Due to some non-backward-compatible API changes, any Deployment objects you created in 1.1 won’t work in the 1.2 release.
- Before upgrading to 1.2, delete all Deployment alpha-version resources, including the Replication Controllers and Pods the Deployment manages. Then create Deployment Beta resources after upgrading to 1.2. Not deleting the Deployment objects may cause the deployment controller to mistakenly match other pods and delete them, due to the selector API change.
- Client (kubectl) and server versions must match (both 1.1 or both 1.2) for any Deployment-related operations.
- Behavior change:
  - Deployment creates ReplicaSets instead of ReplicationControllers.
  - Scale subresource now has a new `targetSelector` field in its status. This field supports the new set-based selectors supported by Deployments, but in a serialized format.
- Spec change:
  - Deployment’s selector is now more general (it supports set-based selectors; it only supported equality-based selectors in 1.1).
  - .spec.uniqueLabelKey is removed -- users can’t customize the unique label key -- and its default value is changed from “deployment.kubernetes.io/podTemplateHash” to “pod-template-hash”.
  - .spec.strategy.rollingUpdate.minReadySeconds is moved to .spec.minReadySeconds.
- DaemonSet was Alpha in 1.1 (though it had apiVersion extensions/v1beta1) and was disabled by default. Due to some non-backward-compatible API changes, any DaemonSet objects you created in 1.1 won’t work in the 1.2 release.
- Before upgrading to 1.2, delete all DaemonSet alpha-version resources. If you do not want to disrupt the pods, use kubectl delete daemonset --cascade=false. Then create DaemonSet Beta resources after upgrading to 1.2.
- Client (kubectl) and server versions must match (both 1.1 or both 1.2) for any DaemonSet-related operations.
- Behavior change:
  - DaemonSet pods will be created on nodes with .spec.unschedulable=true and will not be evicted from nodes whose Ready condition is false.
  - Updates to the pod template are now permitted. To perform a rolling update of a DaemonSet, update the pod template and then delete its pods one by one; they will be replaced using the updated template.
- Spec change:
  - DaemonSet’s selector is now more general (it supports set-based selectors; it only supported equality-based selectors in 1.1).
- Running against a secured etcd requires these flags to be passed to kube-apiserver (instead of --etcd-config):
  - --etcd-certfile, --etcd-keyfile (if using client cert auth)
  - --etcd-cafile (if not using system roots)
- As part of preparation in 1.2 for adding support for protocol buffers (and the direct YAML support in the API available today), the Content-Type and Accept headers are now properly handled as per the HTTP spec. As a consequence, if you had a client that was sending an invalid Content-Type or Accept header to the API, in 1.2 you will receive either a 415 or a 406 error. The only client known to be affected is curl, which, when you use -d with JSON but don’t set a content type, helpfully sends “application/x-www-form-urlencoded”, which is not correct. Other client authors should double-check that you are sending proper Accept and Content-Type headers, or set no value (in which case JSON is the default). An example using curl:

      curl -H "Content-Type: application/json" -XPOST -d '{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"kube-system"}}' "http://127.0.0.1:8080/api/v1/namespaces"
- The version of InfluxDB is bumped from 0.8 to 0.9, which means a storage schema change. More details here.
- We have renamed “minions” to “nodes”. If you were specifying NUM_MINIONS or MINION_SIZE to kube-up, you should now specify NUM_NODES or NODE_SIZE.
- Paused deployments can't be resized and don't clean up old ReplicaSets.
- Minimum memory limit is 4MB. This is a Docker limitation.
- Minimum CPU limit is 10m. This is a Linux kernel limitation.
- “kubectl rollout undo” (i.e. rollback) will hang on paused deployments, because paused deployments can’t be rolled back (this is expected), and the command waits for rollback events to return the result. Users should use “kubectl rollout resume” to resume a deployment before rolling back.
- “kubectl edit” on a list of resources will open the editor multiple times, once for each resource in the list.
- If you create an HPA object using the autoscaling/v1 API without specifying targetCPUUtilizationPercentage and read it using kubectl, it will print the default value as specified in extensions/v1beta1 (see details in #23196).
- If a node or kubelet crashes with a volume attached, the volume will remain attached to that node. If that volume can only be attached to one node at a time (GCE PDs attached in RW mode, for example), then the volume must be manually detached before Kubernetes can attach it to other nodes.
- If a volume is already attached to a node any subsequent attempts to attach it again (due to kubelet restart, for example) will fail. The volume must either be manually detached first or the pods referencing it deleted (which would trigger automatic volume detach).
- In very large clusters, a few nodes may fail to register with the API server in a given timeframe for whatever reason (networking issue, machine failure, etc.). Normally, the kube-up script fails when it encounters even one NotReady node, even though the cluster will most likely be working. We added an environment variable to kube-up, ALLOWED_NOTREADY_NODES, that defines the number of nodes that may fail to become Ready in time without causing kube-up to fail.
- “kubectl rolling-update” only supports Replication Controllers (it doesn’t support Replica Sets). If you want to rolling-update Replica Sets, it’s recommended to use Deployment 1.2 with the “kubectl rollout” commands instead.
- When live upgrading Kubelet to 1.2 without draining the pods running on the node, the containers will be restarted by Kubelet (see details in #23104).
- Listing containers can be slow at times, which will affect kubelet performance. More information here.
- Docker daemon restarts can fail. Docker checkpoints have to be deleted between restarts. More information here.
- Pod IP allocation-related issues. Deleting the docker checkpoint prior to restarting the daemon alleviates this issue, but hasn’t been verified to completely eliminate the IP allocation issue. More information here.
- Daemon becomes unresponsive (rarely) due to kernel deadlocks. More information here.
Core changes:
- Support for load balancers with source ranges
AWS core changes:
- Support for ELBs with complex configurations: better subnet selection with multiple subnets, and internal ELBs
- Support for VPCs with private dns names
- Multiple fixes to EBS volume mounting code for robustness, and to support mounting the full number of AWS recommended volumes.
- Multiple fixes to avoid hitting AWS rate limits, and to throttle if we do
- Support for the EC2 Container Registry (currently in us-east-1 only)
With kube-up on AWS:
- Automatically install updates on boot & reboot
- Use optimized image based on Jessie by default
- Add support for Ubuntu Wily
- Master is configured with automatic restart-on-failure, via CloudWatch
- Bootstrap reworked to be more similar to GCE; better supports reboots/restarts
- Use an elastic IP for the master by default
- Experimental support for node spot instances (set NODE_SPOT_PRICE=0.05)
- Ubuntu Trusty support added
The following GitHub releases, v1.1.2 through v1.2.0-beta.1, are part of 1.2.0:
- v1.1.2
- v1.1.3
- v1.1.4
- v1.1.7
- v1.1.8
- v1.2.0-alpha.4
- v1.2.0-alpha.5
- v1.2.0-alpha.6
- v1.2.0-alpha.7
- v1.2.0-alpha.8
- v1.2.0-beta.0
- v1.2.0-beta.1
Please see the Releases Page for older releases.