Posts

Edit Distance Problem

/* ============================================================ */
//  CF - Edit Distance Problem
//  Solution Code using DP
//  Author - Piyush Jain
/* ============================================================ */
// Note: We are converting string s into t and assuming every operation
// (INSERT, DELETE and REPLACE) costs ONE unit.
// dp[i-1][j]   => stands for the insert operation on string "s"
// dp[i][j-1]   => stands for the delete operation on string "s"
// dp[i-1][j-1] => replacement of the (i-1)th character of "s" with the (j-1)th character of string t
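Since this excerpt cuts off before the actual DP code, here is a minimal sketch of the recurrence described in the comments above, written in Python rather than the post's C++; the function name edit_distance and the sample strings are illustrative.

# Minimal sketch of the edit distance DP described above (illustrative, not the
# post's original C++ solution). Every operation costs one unit.
def edit_distance(s, t):
    n, m = len(s), len(t)
    # dp[i][j] = cost of converting the first i characters of s into the first j of t
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i                      # drop all i characters of s
    for j in range(m + 1):
        dp[0][j] = j                      # add all j characters of t
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if s[i - 1] == t[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]            # characters already match
            else:
                dp[i][j] = 1 + min(dp[i - 1][j],       # skip a character of s
                                   dp[i][j - 1],       # take a character of t
                                   dp[i - 1][j - 1])   # replace s[i-1] with t[j-1]
    return dp[n][m]

print(edit_distance("kitten", "sitting"))  # expected output: 3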

Enable logging in HAProxy.

HAProxy Log Setup

HAProxy does not log directly into a file, for performance reasons, so we need to handle logging with a syslog server. HAProxy also requires a syslog daemon listening on a UDP port, which is not enabled in a default syslog/rsyslog installation.

The format for enabling logging in the haproxy cfg is:

log <address> <facility> [max level [min level]]

Note: this adds a global syslog server. Up to two global servers can be defined. They will receive logs for startups and exits, as well as all logs from proxies configured with "log global".

<address> can be one of:
- An IPv4 address optionally followed by a colon and a UDP port. If no port is specified, 514 is used by default (the standard syslog port).
- A filesystem path to a UNIX domain socket, keeping in mind considerations for chroot (be sure the path is accessible inside the chroot) and uid/gid (be sure the path is appropriately writable).
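As a quick way to check that HAProxy is actually emitting syslog datagrams, a throwaway UDP listener like the Python sketch below can be used. This is only an illustration, not a replacement for a real syslog/rsyslog setup; the 127.0.0.1:5514 address is an assumption, and the matching haproxy cfg line would be something like "log 127.0.0.1:5514 local0".

# Throwaway UDP listener to verify HAProxy syslog delivery (illustrative only).
# Ports below 1024 need root, so this sketch binds a higher port instead of 514.
import socket

HOST, PORT = "127.0.0.1", 5514   # assumed values for this sketch

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((HOST, PORT))
print(f"listening for syslog datagrams on {HOST}:{PORT} ...")

while True:
    data, addr = sock.recvfrom(4096)   # one syslog message per datagram
    print(addr[0], data.decode(errors="replace").rstrip())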

Kruskal Algorithm.

// No one is born perfect :)
// Kruskal Algorithm
// Finds an MST (Minimum Spanning Tree)
// Greedy algorithm
// O(E log E), which equals O(E log V)
// Uses the Disjoint Set (Union-Find) data structure (a very important DS)
// How does it work? (We are assuming the graph is connected.)
//  a) First sort all edges in increasing order of weight.
//  b) Now iterate over each edge (u, v) in Edges and check whether vertices u and v
//     are already in the same tree (which we find using the union and find methods).
//     If not, include this edge; otherwise continue (for more detail see the source code).
//  c) You will get an MST at the end.
// For a proof see the wiki page:
// https://en.wikipedia.org/wiki/Kruskal%27s_algorithm#Proof_of_correctness
// Happy Coding.
/////////////////////////////////////////////////////////////////////////////
#include <iostream>
#include <cstdio>
#include <list>
#include <cstring>
using namespace std;
// vector<vector <long int,long int>
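Because the excerpt ends before the actual implementation, here is a minimal sketch of the steps listed in the comments (sort edges by weight, then use union-find to keep only edges that join two different trees), written in Python rather than the post's C++; the function name kruskal and the sample edge list are illustrative.

# Minimal sketch of Kruskal's algorithm as described above (illustrative;
# the post's full C++ source is not reproduced here).
def kruskal(num_vertices, edges):
    """edges: list of (weight, u, v); returns (mst_weight, mst_edges)."""
    parent = list(range(num_vertices))

    def find(x):                       # find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):                   # returns False if a and b are already in the same tree
        ra, rb = find(a), find(b)
        if ra == rb:
            return False
        parent[ra] = rb
        return True

    mst_weight, mst_edges = 0, []
    for w, u, v in sorted(edges):      # a) sort edges by weight
        if union(u, v):                # b) keep the edge only if it joins two different trees
            mst_weight += w
            mst_edges.append((u, v, w))
    return mst_weight, mst_edges       # c) the collected edges form an MST

# Tiny example graph: 4 vertices, 5 weighted edges.
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))   # expected: (6, [(0, 1, 1), (1, 3, 2), (1, 2, 3)])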

Amazon S3 Storage Class

Amazon S3 Standard - Infrequent Access

Key features:
- Low latency and high throughput performance
- Designed for 99.999999999% durability of objects
- Designed for 99.9% availability over a given year
- Backed with the Amazon S3 Service Level Agreement for availability
- Supports SSL encryption of data in transit and at rest
- Lifecycle management for automatic migration of objects

Standard - IA (Infrequent Access) offers the high durability, throughput, and low latency of Amazon S3 Standard, with a low per-GB storage price and a per-GB retrieval fee. It is the storage class for data that is accessed less frequently but needs rapid access when it is requested. This combination of low cost and high performance makes Standard - IA ideal for long-term storage, backups, and as a data store for disaster recovery. The Standard - IA storage class is set at the object level and can exist in the same bucket as Standard.
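Since the storage class is set per object, here is a hedged boto3 sketch of how an object might be uploaded directly into Standard - IA; the bucket name, key, and body are placeholders.

# Illustrative boto3 snippet: Standard - IA is chosen per object, e.g. at upload
# time via the StorageClass parameter (bucket and key names here are made up).
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-backup-bucket",      # assumed bucket name
    Key="backups/2016-01-01.tar.gz",     # assumed object key
    Body=b"example backup data",
    StorageClass="STANDARD_IA",          # store this object in Standard - IA
)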

Delete S3 Bucket.

The AWS documentation mentions a few methods: delete from the console, use a third party tool or the AWS CLI, or use a lifecycle policy.

          i)   Delete from the console :- Deleting from the console only works when the bucket contains fewer than 100,000 objects, but our bucket contains more than 100,000 objects.

          ii)   Use a third party tool or the AWS CLI :- You can use s3cmd, a third party tool, for deleting the bucket. We use the AWS CLI provided by AWS. Run commands like these:

                            aws s3 rb s3://<bucket-name> --force
                            aws s3 rm s3://<bucket-name> --recursive

                The first command deletes the objects as well as the bucket itself. If versioning is enabled on the bucket, it creates delete marker objects, so only use this command when the bucket does not have versioning enabled, because it deletes only the current versions, not previous versions, and creates delete marker files/objects.
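If you would rather do the same cleanup from Python instead of the AWS CLI, a boto3 sketch along the following lines should work; the bucket name is a placeholder, and deleting every object version also covers delete markers on a versioned bucket.

# Hedged boto3 sketch: delete every object version (covers versioned buckets
# and delete markers), then remove the now-empty bucket. Bucket name is made up.
import boto3

bucket = boto3.resource("s3").Bucket("example-bucket-to-delete")

bucket.object_versions.delete()   # removes all versions and delete markers
bucket.delete()                   # a bucket can only be deleted once it is empty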

Take AWS EC2 instance volume snapshot

This post is about how you can take a snapshot/backup of your AWS EC2 instance's volumes using boto. There are two methods we can use:
1. Cron method
2. Check the time difference between the last snapshot's creation time and the current time.

1. Cron method :- two crons, C1 and C2 (see the boto3 sketch below).
                     i) C1 runs every hour, at the 5th minute.
                        Pick up the current hour, say CHOUR.
                        Pick up volumes defined with backup in hours (2_hour, 4_hour ... etc).
                        Take 2, 4 as the frequency accordingly.
                        If (CHOUR % freq) == 0, take a snapshot.
                     ii) C2 runs every day, at the 12th hour.
                        Pick up the current day, say CDAY.
                        Pick up volumes defined with backup in days (1_day, 7_day ... etc).
                        Take 1, 7 as the frequency accordingly.
                        If (CDAY % freq) == 0, take a snapshot.
Pseudo code:
          i) Make a connection
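Since the excerpt cuts off before the pseudo code, here is a rough boto3 sketch of the C1 cron body described above. The post itself uses the older boto library; the region, the "Backup" tag name, and its "2_hour"/"4_hour" values are assumptions for illustration.

# Rough boto3 sketch of the hourly C1 cron described above (tag name and values
# are assumed). Run it from cron, e.g.:  5 * * * * python snapshot.py
import datetime
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # assumed region
chour = datetime.datetime.utcnow().hour

# Find volumes whose Backup tag requests an hourly frequency, e.g. "2_hour".
volumes = ec2.describe_volumes(
    Filters=[{"Name": "tag:Backup", "Values": ["2_hour", "4_hour"]}]
)["Volumes"]

for vol in volumes:
    tags = {t["Key"]: t["Value"] for t in vol.get("Tags", [])}
    freq = int(tags["Backup"].split("_")[0])          # "4_hour" -> 4
    if chour % freq == 0:                             # the (CHOUR % freq) == 0 check from C1
        snap = ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description="auto backup of %s at hour %d" % (vol["VolumeId"], chour),
        )
        print("created", snap["SnapshotId"], "for", vol["VolumeId"])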