Kubernetes Redis cluster with 3 masters and 3 slaves.
The native Redis cluster is deployed on 3 servers; each server holds one master and one slave.
The conclusion is that the Kubernetes cluster is around 30% slower than the native cluster.
This is understandable since Kubernetes adds a lot of layers (containers, kube-dns, ...), but the benefit is that Kubernetes can handle crashes for us. Which one to choose depends on your business logic.
The primary use case is to automatically save checkpoints during and at the end of training. This way you can use a trained model without having to retrain it, or pick up training where you left off in case the training process was interrupted.
tf.keras.callbacks.ModelCheckpoint is a callback that performs this task. The callback takes a couple of arguments to configure checkpointing.
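A minimal sketch of configuring the callback. The filepath pattern and the particular argument values here are illustrative assumptions, not the only valid configuration:

```python
import tensorflow as tf

# Illustrative filepath pattern: the epoch number is filled in at save time.
checkpoint_path = "training/ckpt-{epoch:02d}.weights.h5"

# Save only the weights, once per epoch (argument values are assumptions).
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_path,
    save_weights_only=True,
    save_freq="epoch",
    verbose=1,
)

# The callback is then passed to fit() alongside the training data, e.g.:
# model.fit(x_train, y_train, epochs=5, callbacks=[checkpoint_cb])
```

After an interruption, `model.load_weights(...)` on the latest saved file restores the state so training can resume.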
Error: /tensorflow/serving/.cache/_bazel_jfan/7a4a59242df6fd82e0e4108ffd6fce39/external/org_tensorflow/tensorflow/core/BUILD:2101:1: no such package '@zlib_archive//': java.io.IOException: Error downloading [https://mirror.bazel.build/zlib.net/zlib-1.2.11.tar.gz, https://zlib.net/zlib-1.2.11.tar.gz] to /tensorflow/serving/.cache/_bazel_jfan/7a4a59242df6fd82e0e4108ffd6fce39/external/zlib_archive/zlib-1.2.11.tar.gz: All mirrors are down: [sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target] and referenced by '@org_tensorflow//tensorflow/core:lib_internal_impl'
There are four grant types in OAuth 2.0. Since all of our resources are for internal use, we use the password grant (resource owner password credentials).
Authorization server:
Authorization endpoint.
Token endpoint.
Resource server
We deploy the authorization server and the resource servers separately for scalability. Since we have many separate resource servers, each of them has its own client id and client secret.
The user first creates a username and password, which are registered along with the client info. Then each time, the user asks the authorization server for an access token with these credentials, and uses the access token to access the corresponding resource server and get resources.
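A sketch of what the password-grant token request carries. The endpoint, client id/secret, and credentials below are hypothetical placeholders:

```python
# Form fields of a password-grant token request (all values are
# hypothetical placeholders for illustration).
token_request = {
    "grant_type": "password",
    "username": "alice",                  # resource owner's credentials
    "password": "alice-secret",
    "client_id": "billing-service",       # per-resource-server client info
    "client_secret": "billing-secret",
}

# The actual HTTP call POSTs this form to the token endpoint, e.g.:
#   POST https://auth.example.com/oauth/token
# and the response carries the token used against the resource server:
#   {"access_token": "...", "token_type": "bearer", "expires_in": 3600}
```

Every subsequent request to a resource server then sends the token in an `Authorization: Bearer ...` header.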
Interpretation: p and q are connected if they have the same id.
Find
Check if p and q have the same id.
Time complexity: O(1)
Union
To merge components containing p and q, change all entries with id[p] to id[q].
Time complexity: O(N)
public class QuickFind {
    private int[] id;

    public QuickFind(int N) {
        id = new int[N];
        for (int i = 0; i < N; i++)
            id[i] = i; // set id of each object to itself
    }

    public boolean find(int p, int q) {
        return id[p] == id[q];
    }

    public void union(int p, int q) {
        int pid = id[p];
        for (int i = 0; i < id.length; i++)
            if (id[i] == pid)
                id[i] = id[q];
    }
}
2. Quick Union
Data structure.
Integer array id[] of size N.
Interpretation: id[i] is parent of i.
Root of i is id[id[id[...id[i]...]]].
Find
Check if p and q have the same root.
Time complexity: O(N)
Union
Set the id of q's root to the id of p's root.
Time complexity: O(N), including the cost of finding both roots.
In practice union is often cheaper than in quick find because we change only one entry instead of scanning the entire array, but in the worst case a tall tree makes finding a root, and therefore union, linear.
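The quick-union scheme above can be sketched as follows (a Python sketch mirroring the Java quick-find note; class and method names are my own):

```python
class QuickUnion:
    def __init__(self, n):
        # id[i] is the parent of i; initially every node is its own root.
        self.id = list(range(n))

    def root(self, i):
        # Chase parent pointers until reaching a self-loop (the root).
        while i != self.id[i]:
            i = self.id[i]
        return i

    def find(self, p, q):
        # p and q are connected iff they share a root.
        return self.root(p) == self.root(q)

    def union(self, p, q):
        # Set the id of q's root to the id of p's root.
        self.id[self.root(q)] = self.root(p)
```

For example, after `union(4, 3)` and `union(3, 8)`, `find(4, 8)` returns True while `find(0, 4)` stays False.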
Kafka is a distributed publish-subscribe messaging system. Publish-subscribe refers to a pattern of communication in distributed systems where the producers/publishers of data produce data categorized in different classes without any knowledge of how the data will be used by the subscribers. The consumers/subscribers can express interest in specific classes of data and receive only those messages. Kafka uses a commit log to persist data. The commit log is an ordered, immutable, append-only data structure that is the main abstract data structure that Kafka manages. The main advantage of Kafka is the fact that it provides a unifying data backbone from which all systems in the organization can consume data independently and reliably.
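A toy in-memory version makes the append-only commit-log abstraction concrete (this is purely an illustration of the data structure, not how Kafka is implemented):

```python
class CommitLog:
    def __init__(self):
        self._records = []  # ordered, append-only

    def append(self, message):
        # Appending is the only write; the assigned offset is the position.
        offset = len(self._records)
        self._records.append(message)
        return offset

    def read(self, offset):
        # Readers address records by offset; records are never mutated.
        return self._records[offset]
```

Each consumer simply remembers the last offset it has read, which is what makes independent, reliable consumption by many systems cheap.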
Topics
Topics represent a user-defined category to which messages are published. An example topic one might find at an advertising company could be AdClickEvents. All consumers of data read from one or more topics. Topics are generally maintained as a partitioned log (see below).
Producers
Producers are processes that publish messages to one or more topics in the Kafka cluster.
Consumers
Consumers are processes that subscribe to topics and read messages from the Kafka cluster.
Partitions
Topics are divided into partitions. A partition represents the unit of parallelism in Kafka. In general, a higher number of partitions means higher throughput. Within each partition each message has a specific offset that consumers use to keep track of how far they have progressed through the stream. Consumers may use Kafka partitions as semantic partitions as well.
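How a keyed message lands on a partition can be sketched as below. Kafka's default Java partitioner actually uses murmur2 on the key bytes; CRC32 here is a stand-in to keep the sketch dependency-free:

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    # Same key -> same partition, so per-key ordering is preserved
    # within a partition. (Kafka's default uses murmur2, not CRC32.)
    return zlib.crc32(key) % num_partitions
```

Because the mapping is deterministic, all messages for one key (e.g. one ad campaign) stay in order relative to each other.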
Brokers
Brokers in Kafka are responsible for message persistence and replication. Producers talk to brokers to publish messages to the Kafka cluster and consumers talk to brokers to consume messages from the Kafka cluster.
Replication
Kafka uses replication for fault tolerance. For each partition of a Kafka topic, Kafka will choose a leader and zero or more followers from servers in the cluster and replicate the log across them. The number of replicas, including the leader, is determined by the replication factor. The leader of the partition handles all reads and writes, while the followers consume messages from the leader in order to replicate the log. Since both leader and followers may fail when a server in the cluster fails, the leader keeps track of alive followers (the in-sync replicas, or ISR) and removes unhealthy ones. If the leader dies, an alive follower becomes the new leader of the partition. This mechanism allows Kafka to remain functional in the presence of failures.
Given an array S of n integers, are there elements a, b, c in S such that a + b + c = 0? Find all unique triplets in the array that give a sum of zero.
Solution: Brute force would be O(N^3).
However we traverse, we must first fix one number; the remaining problem then reduces to two sum. Sorting the array first lets each two-sum pass run with two pointers in O(N), for O(N^2) overall.
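The sort-plus-two-pointer idea can be sketched like this (a standard approach; variable names are my own):

```python
def three_sum(nums):
    nums = sorted(nums)
    result = []
    for i in range(len(nums) - 2):
        if i > 0 and nums[i] == nums[i - 1]:
            continue  # skip duplicate choices of the first number
        lo, hi = i + 1, len(nums) - 1
        while lo < hi:  # two-sum with two pointers on the sorted suffix
            s = nums[i] + nums[lo] + nums[hi]
            if s < 0:
                lo += 1
            elif s > 0:
                hi -= 1
            else:
                result.append([nums[i], nums[lo], nums[hi]])
                lo += 1
                hi -= 1
                while lo < hi and nums[lo] == nums[lo - 1]:
                    lo += 1  # skip duplicate second numbers
    return result
```

For example, `three_sum([-1, 0, 1, 2, -1, -4])` yields `[[-1, -1, 2], [-1, 0, 1]]`, each unique triplet reported once.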