Posts


Backup AWS Dynamodb To S3

Answer : With the introduction of AWS Data Pipeline, which ships with a ready-made template for DynamoDB-to-S3 backup, the easiest way is to schedule a backup in Data Pipeline [link]. In case you have special needs (data transformation, very fine-grained control, ...), consider the answer by @greg.

There are some good guides for working with MapReduce and DynamoDB. I followed this one the other day and got data exporting to S3 going reasonably painlessly. I think your best bet would be to create a Hive script that performs the backup task, save it in an S3 bucket, then use the AWS API for your language to programmatically spin up a new EMR job flow and complete the backup. You could set this up as a cron job.

Example of a Hive script exporting data from DynamoDB to S3:

    CREATE EXTERNAL TABLE my_table_dynamodb (
        company_id string
        ,id string
        ,name string
        ,city string
        ,state string
        ,postal_code string)
    STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
    TB...
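As a rough sketch of the "spin up an EMR job flow from a cron job" idea (not from the original post): the same thing can be done with the AWS CLI instead of an SDK. The bucket name, script key, instance sizing and schedule below are all placeholders.

    # hypothetical cron entry: run the backup every night at 03:00
    # 0 3 * * * /usr/local/bin/dynamodb-backup.sh
    #
    # dynamodb-backup.sh -- start a transient EMR cluster that runs the Hive script and shuts down
    aws emr create-cluster \
        --name "dynamodb-s3-backup" \
        --release-label emr-6.15.0 \
        --applications Name=Hive \
        --use-default-roles \
        --instance-type m5.xlarge \
        --instance-count 3 \
        --steps Type=HIVE,Name="Export to S3",ActionOnFailure=TERMINATE_CLUSTER,Args=[-f,s3://my-backup-bucket/backup.hql] \
        --auto-terminate

The --auto-terminate flag keeps the cluster transient, so you only pay for the duration of the backup step.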

Can I Rsync To Multiple Destinations Using Same Filelist?

Answer : Solution 1: Here is the information from the man page for rsync about batch mode.

BATCH MODE

Batch mode can be used to apply the same set of updates to many identical systems. Suppose one has a tree which is replicated on a number of hosts. Now suppose some changes have been made to this source tree and those changes need to be propagated to the other hosts. In order to do this using batch mode, rsync is run with the write-batch option to apply the changes made to the source tree to one of the destination trees. The write-batch option causes the rsync client to store in a "batch file" all the information needed to repeat this operation against other, identical destination trees. Generating the batch file once saves having to perform the file status, checksum, and data block generation more than once when updating multiple destination trees. Multicast transport protocols can be used to transfer the batc...
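A minimal sketch of that workflow (host names and paths are made up for illustration): write the batch once while updating the first destination, then replay it on the other identical hosts without re-reading the source tree.

    # write the batch while updating the first (identical) destination tree
    rsync -a --write-batch=/tmp/batch /srcdir/ host1:/destdir/
    # copy the batch file to a second identical host and replay it there,
    # skipping the file-status, checksum and block-generation work
    scp /tmp/batch host2:/tmp/batch
    ssh host2 rsync -a --read-batch=/tmp/batch /destdir/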

Btrfs Snapshot To Non-btrfs Disk. Encryption, Read Access

Answer : I will just add to Gilles' answer by saying that although you may use cp, rsync, etc. to transfer your read-only subvolumes/snapshots, you may also send and store the subvolumes as btrfs streams using the btrfs send command. The btrfs Wiki mentions the following use:

    # btrfs subvolume snapshot -r / /my/snapshot-YYYY-MM-DD && sync
    # btrfs send /my/snapshot-YYYY-MM-DD | ssh user@host btrfs receive /my/backups
    # btrfs subvolume snapshot -r / /my/incremental-snapshot-YYYY-MM-DD && sync
    # btrfs send -p /my/snapshot-YYYY-MM-DD /my/incremental-snapshot-YYYY-MM-DD | ssh user@host btrfs receive /backup/home

but you may also just save the streams for future use:

    # btrfs subvolume snapshot -r / /my/snapshot-YYYY-MM-DD && sync
    # btrfs send /my/snapshot-YYYY-MM-DD | ssh user@host 'cat >/backup/home/snapshot-YYYY-MM-DD.btrfs'
    # btrfs subvolume snapshot -r / /my/incremental-snapshot-YYYY-MM-DD && sync
    # btrfs send -p /my...
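To complete the picture with a hedged sketch (not part of the original answer): a stream saved as a plain file on a non-btrfs disk can later be replayed into any btrfs filesystem with btrfs receive. The mount point and the incremental file name below are hypothetical.

    # recreate the read-only snapshot from the saved full stream
    btrfs receive -f /backup/home/snapshot-YYYY-MM-DD.btrfs /mnt/btrfs-restore
    # then apply the incremental stream on top of the restored parent snapshot
    btrfs receive -f /backup/home/incremental-snapshot-YYYY-MM-DD.btrfs /mnt/btrfs-restore

Incremental streams can only be received when their parent snapshot is already present at the destination, so restore the full stream first.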

Amazon RDS Instance Backup Window Duration?

Answer : The backup window doesn't ask for the specific time at which to start the backup; instead it asks for the time period during which AWS may trigger the backup. So basically it's asking for a backup window. That's why it has 2 fields:

1. StartTime: when the backup process may be started.
2. Duration: the time window within which the backup process must start.

E.g. if I set start time 5:30 and duration 30 minutes, the backup can start at any time between 5:30 and 6:00.

From the Working With Backups documentation, below is the answer to "what if the backup did not fit into the backup window?":

If the backup requires more time than allotted to the backup window, the backup continues after the window ends, until it finishes.

Below is the answer to "if the backup may not fit into the backup window, why do we need a backup window?":

The backup window can't overlap with the weekly maintenance window for the DB instance.
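For illustration only (the instance identifier and retention period are made up), the window from the example above can be set through the AWS CLI; the value is a UTC hh24:mi-hh24:mi range, i.e. the start time plus the duration:

    # ask AWS to trigger automated backups somewhere between 05:30 and 06:00 UTC
    aws rds modify-db-instance \
        --db-instance-identifier mydbinstance \
        --preferred-backup-window 05:30-06:00 \
        --backup-retention-period 7 \
        --apply-immediately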

Can I Restore A Single Table From A Full Mysql Mysqldump File?

Answer : You can try to use sed in order to extract only the table you want. Say the name of your table is mytable and the file mysql.dump is the file containing your huge dump:

    $ sed -n -e '/CREATE TABLE.*`mytable`/,/Table structure for table/p' mysql.dump > mytable.dump

This will copy into the file mytable.dump what is located between CREATE TABLE mytable and the next CREATE TABLE corresponding to the next table. You can then adjust the file mytable.dump, which contains the structure of the table mytable and the data (a list of INSERT statements).

I used a modified version of uloBasEI's sed command. It includes the preceding DROP TABLE command and reads until MySQL is done dumping data to your table (UNLOCK TABLES). It worked for me when (re)importing wp_users to a bunch of WordPress sites.

    sed -n -e '/DROP TABLE.*`mytable`/,/UNLOCK TABLES/p' mydump.sql > tabledump.sql

Can this be done more easily? This is how I did it: Create a temporary database (e.g. restore): ...
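The last approach is cut off above; as a rough sketch of the temporary-database idea (database and file names are placeholders, not taken from the original post), it usually looks like this:

    # load the full dump into a scratch database
    mysql -u root -p -e "CREATE DATABASE restore"
    mysql -u root -p restore < mysql.dump
    # dump only the table you need and feed it back into the real database
    mysqldump -u root -p restore mytable > mytable.sql
    mysql -u root -p mydatabase < mytable.sql
    # clean up the scratch database
    mysql -u root -p -e "DROP DATABASE restore"

This avoids hand-editing the dump at the cost of temporarily restoring the whole thing, so it needs enough disk space and time for a full import.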