Downloading files in bash
Taking the "just Bash and nothing else" requirement strictly, here's an adaptation of earlier answers (Chris's among them) that does not call any external utilities, not even standard ones, but also works with binary files. We deal with NUL bytes using read -d '': it reads until a NUL byte and returns true if it found one, false if it didn't. Bash can't store NUL bytes in strings, so when read returns true we add the NUL byte back manually when printing, and when it returns false we know there are no NUL bytes left, and this must be the last piece of data.
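A minimal sketch of the technique, assuming a Bash built with /dev/tcp support; the host, port, and path in the download function are placeholders. The NUL-handling loop is the core trick described above:

```shell
#!/usr/bin/env bash
# Binary-safe copy from stdin to stdout using only Bash built-ins.
# read -d '' reads up to a NUL byte (true if it found one, false at EOF),
# so we re-add the NUL with printf after every successful read.
nul_safe_cat() {
  local chunk
  while IFS= read -r -d '' chunk; do
    printf '%s\0' "$chunk"
  done
  printf '%s' "$chunk"   # last piece: no NUL followed it
}

# Hypothetical usage: fetch a file over plain HTTP via Bash's /dev/tcp.
bash_only_download() {
  local host=$1 path=$2
  exec 3<>"/dev/tcp/$host/80"
  printf 'GET %s HTTP/1.0\r\nHost: %s\r\n\r\n' "$path" "$host" >&3
  local line
  while IFS= read -r -u3 line; do      # skip the response headers
    [[ ${line%$'\r'} ]] || break
  done
  nul_safe_cat <&3
  exec 3<&-
}
```

Calling it would look like `bash_only_download example.com /file.bin > file.bin`; the function names here are my own, not from the original answer.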

Tested with Bash 4. Downloading the wget binary this way took far longer than the cat-based solution, which finishes in under a second. Not very surprising, really. This is obviously silly, since without external utilities there's not much we can do with the downloaded file, not even make it executable.

So you can also use SSH to upload to it, which is functionally equivalent to downloading software packages and the like. As shown in this answer, you would execute the following on your local machine to place a file on your remote headless server. The disadvantage of this solution compared to downloading directly is lower transfer speed, since the connection from your local machine usually has much less bandwidth than the connection between your headless server and other servers. To work around that, you can of course execute the same command on another server with decent bandwidth.
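A sketch of the upload direction, with hypothetical host and path names; the remote cat simply writes its stdin to the target file:

```shell
#!/usr/bin/env bash
# Push a local file to the headless server over SSH (names are placeholders).
upload_via_ssh() {
  local src=$1 host=$2 dest=$3
  # The remote cat receives the file on stdin and writes it to $dest.
  ssh "$host" "cat > '$dest'" < "$src"
}

# Example call (not run here):
#   upload_via_ssh ./package.deb user@headless-server /tmp/package.deb
```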

To make that more comfortable (avoiding a manual login on the third machine), here is a command to execute on your local machine; see the explanations below for the reasoning. The command will SSH to your third machine (intermediate-host), start downloading a file there via wget, and start uploading it to target-host via SSH. Downloading and uploading use the bandwidth of your intermediate-host and happen at the same time (due to Bash pipe equivalents), so progress will be fast.
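The shape of that command, as a sketch with hypothetical host names and URL; the pipe exists entirely on intermediate-host, so the payload never travels through the slow local connection:

```shell
#!/usr/bin/env bash
# Download via an intermediate host's bandwidth.
# Runs on the LOCAL machine; intermediate-host fetches the URL and streams
# it straight on to target-host over a second SSH connection.
relay_download() {
  local intermediate=$1 url=$2 target=$3 dest=$4
  ssh "$intermediate" \
    "wget -O - '$url' | ssh -T -e none '$target' \"cat > '$dest'\""
}

# Example call (not run here; names and URL are placeholders):
#   relay_download user@intermediate-host https://example.com/big.iso \
#                  user@target-host /srv/big.iso
```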

For the -T -e none SSH options used when transferring files this way, see these detailed explanations. This command is meant for cases where you can't use SSH's public-key authentication mechanism (still the case with some shared hosting providers, notably Host Europe). To automate the process anyway, we rely on sshpass to supply the password within the command.

It requires sshpass to be installed on your intermediate host (sudo apt-get install sshpass under Ubuntu). We try to use sshpass in a secure way, but it will still not be as secure as the SSH pubkey mechanism (says man sshpass). In particular, we supply the SSH password not as a command-line argument but via a file, which is replaced by Bash process substitution to make sure it never exists on disk.
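The sshpass invocation might look like this (hypothetical host and paths, and it assumes sshpass is installed); the process substitution gives sshpass a file-like path whose contents never touch the disk:

```shell
#!/usr/bin/env bash
# Supply the SSH password to sshpass through a process substitution.
push_with_password() {
  local src=$1 host=$2 dest=$3 password=$4
  # <(printf ...) hands sshpass a /dev/fd path, so the password is never
  # written to disk; printf is a built-in, so it never shows up in ps output.
  sshpass -f <(printf '%s\n' "$password") \
    ssh -T -e none "$host" "cat > '$dest'" < "$src"
}
```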

The printf is a Bash built-in, making sure this part of the code does not show up as a separate command in ps output, as that would expose the password [source].

And all that without using a temp file [source]. But no guarantees; maybe I overlooked something. Again, to make the sshpass usage safe, we need to prevent the command from being recorded to the Bash history on your local machine. For that, the whole command is prepended with one space character, which has this effect. Normally, SSH would then wait for user input to confirm the connection attempt.
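Whether a leading space actually keeps a command out of history depends on your HISTCONTROL setting, so it's worth checking first; a small sketch using only Bash's own variables:

```shell
#!/usr/bin/env bash
# Returns success only if Bash is configured to drop commands that start
# with a space from the history, i.e. HISTCONTROL contains ignorespace
# (or ignoreboth, which implies it).
history_hides_leading_space() {
  case ":${HISTCONTROL:-}:" in
    *ignorespace*|*ignoreboth*) return 0 ;;
    *)                          return 1 ;;
  esac
}
```

If the check fails, `HISTCONTROL=ignoreboth` in your ~/.bashrc enables the behavior.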

We make it proceed anyway. So we have to rewrite the typical wget -O - … | ssh … command into a form without a Bash pipe, as explained here.

The third bullet point on this page states that questions about "software tools commonly used by programmers" are considered on-topic. Therefore this question, being about Linux (which can be considered a tool, and is most definitely commonly used by programmers), is perfectly valid.

If you disagree, please at least consider migrating the question to Server Fault.

Since the question is locked and I cannot post as an answer, I will write it as a comment. Of course you can change the URL to your needs; more can be found here. (Murat Aykanat) I have also needed to do this as a Linux noob. If you don't run this test, your command could return annoying error messages.
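A hedged sketch of that idea: test which downloader is actually available before building the command, so a missing tool doesn't produce those error messages (the function name and the URL in the usage sketch are my own, not from the comment):

```shell
#!/usr/bin/env bash
# Print the name of an available download tool, or fail if there is none.
pick_downloader() {
  if   command -v curl >/dev/null 2>&1; then echo curl
  elif command -v wget >/dev/null 2>&1; then echo wget
  else return 1
  fi
}

# Usage sketch (URL is a placeholder):
#   case "$(pick_downloader)" in
#     curl) curl -fLO 'https://example.com/file' ;;
#     wget) wget 'https://example.com/file' ;;
#   esac
```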

Since Amazon SES gives ugly and unreadable names to each of the messages it drops into my S3 bucket, I'll now dynamically rename them while, at the same time, moving them over to their new home in the dated directory I just created. The for loop handles this. As part of the same operation, I'll give each file its new name: the string "email" followed by a random number generated by the rand command. You may need to install a random number generator: that'll be apt install rand on Ubuntu.
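A sketch of that rename-and-move step, with hypothetical paths; Bash's built-in $RANDOM stands in here for the external rand command, since rand's exact invocation may differ between installations:

```shell
#!/usr/bin/env bash
# Move freshly arrived messages into a dated directory, giving each a
# readable name: "email" plus a random number ($RANDOM stands in for rand).
organize_emails() {
  local srcdir=$1 destroot=$2
  local datedir="$destroot/$(date +%Y%m%d)"
  mkdir -p "$datedir"
  local f
  for f in "$srcdir"/*; do
    [ -e "$f" ] || continue            # glob matched nothing: skip
    mv "$f" "$datedir/email$RANDOM"    # collisions possible but unlikely
  done
}
```

A call like `organize_emails tmpemails emails` would empty the staging directory into today's dated folder.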

That worked fine as long as I didn't happen to receive more than one batch of messages on the same day; if I did, the new messages would overwrite older files in that day's directory. I guess it's mathematically possible that my rand command could assign overlapping numbers to two files but, given the size of rand's default range, that's a risk I'm willing to take.

At this point, there should be files in the new directory with names like "email" followed by a number. There are currently no files left in tmpemails; that's because the mv command moves files to their new location, leaving nothing behind.

The final section of the script opens each new message in my favorite desktop text editor, Gedit. It uses a similar for loop to do so.
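That loop might look like this (directory name hypothetical); the editor is a parameter so the sketch can be exercised without a desktop session:

```shell
#!/usr/bin/env bash
# Open every file in a directory with the given editor (default: gedit).
open_in_editor() {
  local dir=$1 editor=${2:-gedit}
  local f
  for f in "$dir"/*; do
    [ -e "$f" ] || continue   # glob matched nothing: skip
    "$editor" "$f"
  done
}

# Example call (not run here): open_in_editor "emails/$(date +%Y%m%d)"
```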
