I have a cron job specified like this:
0 * * * * root bash /data/daily.sh
Inside daily.sh is:
/data/get.sh https://www.xxxxxxx.com/ccc/ 0
As you can see, get.sh takes two arguments: the URL to fetch and the recursion depth. The script calls get.sh again with an incremented depth counter and a different URL scraped from the first run's result, and stops once it reaches a certain depth.
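The recursion described above might look roughly like the sketch below. This is a guess at get.sh's structure, not the actual script: the `MAX_DEPTH` value, the `get` function name, and the crude link extraction are all assumptions for illustration.

```shell
#!/bin/bash
# Hypothetical sketch of get.sh's recursion. MAX_DEPTH and the
# link-extraction step are assumptions, not from the original script.

MAX_DEPTH=3  # assumed stopping depth

get() {
    local url="$1" depth="$2"
    # stop once the assumed maximum depth is reached
    if [ "$depth" -ge "$MAX_DEPTH" ]; then
        return 0
    fi
    # fetch the page (the question's version: wget -O- "$url" > main.htm)
    local page
    page=$(wget -qO- "$url") || return 1
    # very rough link extraction, then recurse with an incremented depth
    printf '%s\n' "$page" | grep -oE 'https?://[^"<> ]+' |
    while read -r next; do
        get "$next" $((depth + 1))
    done
}

# run only when a URL argument is supplied
if [ -n "${1:-}" ]; then
    get "$1" "${2:-0}"
fi
```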
Inside get.sh, I scrape the website with this command:
wget -O- "$1" > main.htm
The problem is that main.htm is not created when the script runs via crontab. wget's log says the page was saved to 'STDOUT', whereas when I run the script manually it is saved to 'main.htm'. How can I solve this?