I need to get data from several pages. The data are stored in HTML tables.
I want to generate a SQL file that saves them to my database.
Getting those results is one of my aims.
I could also download all the needed pages using wget and process them with Python, if it has the necessary libraries for working with HTML.
If I understood correctly, you basically have to scrape some content from the web and store it in a database.
I would probably go for a Python script that crawls the pages using the
urllib2 library and then parses them in some way depending on the content you need (regexps, BeautifulSoup, etc.).
Take a look at this question: Web scraping with Python
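As a minimal sketch of the table-to-SQL step (the table layout, the column names, and the `people` table name are assumptions for illustration; in a real script the HTML would come from `urllib2.urlopen(url).read()`, and with BeautifulSoup you would replace the regexps with `find_all('tr')` / `find_all('td')`):

```python
import re

# Sample HTML standing in for a downloaded page;
# this table layout is an assumption for illustration.
html = """
<table>
  <tr><td>Alice</td><td>30</td></tr>
  <tr><td>Bob</td><td>25</td></tr>
</table>
"""

statements = []
for row in re.findall(r'<tr>(.*?)</tr>', html, re.DOTALL):
    # Pull out the cell values of one table row
    name, age = re.findall(r'<td>(.*?)</td>', row, re.DOTALL)
    # Escape single quotes so the generated SQL stays valid
    name = name.replace("'", "''")
    statements.append(
        "INSERT INTO people (name, age) VALUES ('%s', %s);" % (name, age)
    )

sql = "\n".join(statements)
print(sql)
```

Writing the collected `sql` string to a file then gives you the SQL script to run against your database. Note that regexps are fragile against markup variations (attributes, nesting, whitespace), which is why BeautifulSoup is usually the safer choice for anything beyond simple, regular tables.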