I used a Python crawler to scrape data from Douban, but nothing gets written to the CSV (the CSV file ends up blank)…
Here is the callback class that writes the data:

import csv
import lxml.html

class ScrapeCallback:
    def __init__(self):
        self.writer = csv.writer(open('countries.csv', 'w', newline=''))
        self.fields = ('name', 'year', 'score')

    def __call__(self, url, html):
        csslist = ['span[property="v:itemreviewed"]', 'span.year', 'strong[property="v:average"]']
        try:
            tree = lxml.html.fromstring(html)
            row = [tree.cssselect(selector)[0].text for selector in csslist]
            # Write to CSV
            self.writer.writerow(row)
            print(url, row)
        except Exception as e:
            print("ScrapeCallback error:", e)

The print statement displays the data normally, which suggests the function itself is not at fault. I would be very grateful if someone could find the cause. Thank you very much.
Attached source code

Answer 0:

The problem has been solved.
Reason: the process was forcibly terminated, so the file was never closed properly and the buffered rows were never flushed to disk.
