So I have dumped our web app's files (hosted on Amazon EC2) from the /var/www directory, along with the SQL dump. Now I want to download them to a virtual machine on our HPC, located here in the building, so I can scrutinize the SQL dump and do an in-place substitution of the hard-coded IP addresses in some of our database entries. There is a defect in the Tripal toolkit: when I add, say, InterPro analysis results, the links from feature pages to the dedicated pages for those results carry the domain name of the site hard-coded into them. We were formerly using 220.127.116.11 as our IP address but had to change to 54.***.****.***. Therefore: ka-blam! Links cannot be found.
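The substitution itself could be a one-liner once the dump is local. A minimal sketch with `sed`, assuming the dump is a plain-text file named `appdb.sql` (a hypothetical name); the new address below is a stand-in, since the real one was redacted above:

```shell
# Replace the old hard-coded IP with the new one throughout the SQL dump.
# Dots are escaped so they match literally; 54.0.0.1 is a placeholder
# for the real (redacted) new address, and appdb.sql is a made-up filename.
# -i.bak edits the file in place while keeping a backup of the original.
sed -i.bak 's/220\.127\.116\.11/54.0.0.1/g' appdb.sql
```

After that, a quick `grep 220.127.116.11 appdb.sql` should come back empty before reloading the dump.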
Now, I noticed that the tarball of the physical files hovers at around 2.7 GiB, much bigger (and quite unusual) compared to the 1.7 GiB SQL dump. I realized there was a folder inside where I had placed genomic downloads, and those are actual files, not just symbolic links. Given what I aim to do, I could drop that folder from the archive first before downloading to the local HPC. But I was never able to successfully install a UI on the EC2 instance, so I access it only via SSH; the dream of using the likes of the WinZip or WinRAR UI to delete a folder from the archive is therefore out of the question.
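Over SSH, GNU tar can do this directly with `--delete`. A sketch, assuming an uncompressed tarball named `webapp.tar` containing a hypothetical member path `var/www/genomic_downloads` (the real names will differ):

```shell
# Remove a directory (and everything under it) from a tar archive in place.
# Note: GNU tar's --delete works only on plain .tar files, not on
# compressed archives (.tar.gz) or tapes; gunzip first if needed.
tar --delete -f webapp.tar var/www/genomic_downloads

# Confirm the folder is gone by listing the archive contents.
tar -tf webapp.tar | grep genomic_downloads
```

The member path must match exactly how it is stored in the archive, so it is worth checking `tar -tf webapp.tar | head` first to see whether entries start with a leading `./` or an absolute path.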
Thankfully, a little Googling brought up The Linux's Daily, whose article saves the day.
[[Reblogged]] Remove a File / Directory from a Tarball Without Extracting First