I'd like to programmatically save (in C++ or PHP) the source code of a web page that I see in my browser (Mozilla under Windows) to a file. When the content of the web page exists as a physical file on the server, this is easily done with e.g. the curl program or PHP's cURL library. However, the standard way I use these two methods fails for web pages with addresses of the form:
So, I mean web pages produced by a PHP script that takes parameters: change the parameters and you see different content. Is it possible to write code in C++ or PHP (or command-line parameters for the curl program) that copies the source code of such web pages into a file or a variable?
Thanks for any help.
The specific web page I'm testing is
With the curl program I'm just running the command line

curl -O url_address

which, for the above web page, returns "the error page."
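One common cause of that error is the query string itself: an unquoted `&` is a command separator in the shell, so everything after the first parameter never reaches curl, and `-O` also needs a filename in the URL path to derive the output name from. A sketch of a more robust invocation, using a placeholder URL (the actual fetch is commented out because it needs network access):

```shell
# Placeholder address standing in for the dynamic page (an assumption,
# not the asker's real URL).
url='http://example.com/show.php?id=5&lang=en'

# Quoting the URL keeps the whole query string intact.
# -o names the output file explicitly (unlike -O, which derives it from the path),
# -L follows redirects, and -A sends a browser-like User-Agent string,
# which some dynamic pages check before serving content.
# curl -sL -A 'Mozilla/5.0' -o page.html "$url"

printf '%s\n' "$url"
```

The same quoting rule applies to any URL with `&`, `?`, or spaces in it, whichever tool does the fetching.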
$webpage = shell_exec("lynx -source 'http://XXXX'");
which will fetch a single page and store its source in the $webpage variable.
I also know you can do it with a simple

$webpage = file_get_contents("http://xxxxx");

as well, but that doesn't support any special features.
Edit: file_get_contents doesn't appear to work with the requested URL; lynx does, however.
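For completeness, the lynx route works the same way from the shell as from shell_exec: `lynx -source` dumps the raw HTML to stdout, which can be captured in a variable or redirected to a file. A minimal sketch with a placeholder URL (the fetch lines are commented out since they need lynx installed and network access):

```shell
url='http://example.com/show.php?id=5&lang=en'   # placeholder address
outfile='page.html'

# Capture the page source in a shell variable:
# webpage="$(lynx -source "$url")"

# Or redirect it straight to a file:
# lynx -source "$url" > "$outfile"

printf 'would save %s to %s\n' "$url" "$outfile"
```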