I'm using Pytrends to extract Google Trends data, like this:

from pytrends.request import TrendReq
pytrend = TrendReq()
# from_date and today_date are 'YYYY-MM-DD' strings
pytrend.build_payload(kw_list=['bitcoin'], cat=0, timeframe=from_date + ' ' + today_date)

And it returns an error:

ResponseError: The request failed: Google returned a response with code 429.

It worked yesterday, but for some reason it doesn't work now! The sample code from GitHub fails too:

pytrends = TrendReq(hl='en-US', tz=360, proxies = {'https': 'https://34.203.233.13:80'})

How can I fix this? Thanks a lot!

Google will block your IP pretty fast if they suspect you of spamming, scraping, or any other kind of abuse of the system – Druta Ruslan May 28, 2018 at 18:16

@zimdero Thanks, what can I do if I still wanna use it? The "proxies" command doesn't work. – WWH98932 May 28, 2018 at 18:20

I am actively trying to solve this same exact problem. I'm doing research for an academic paper and it was working yesterday; today I haven't been able to get one successful response. Looking forward to finding a good solution... – lopezdp May 28, 2018 at 23:09

@WWH98932 What are you working on? I am making the same exact search. I'm thinking we're going to have to download the csv file from the web interface and then bring it into JupyterLab and into a DataFrame. That's the approach I am leaning towards anyway. – lopezdp May 28, 2018 at 23:37

@lopezdp Same, I'm doing it manually, it's really annoying. I'll go somewhere else tomorrow to see if it can change my IP address... – WWH98932 May 28, 2018 at 23:53

TL;DR: I solved the problem with a custom patch.

Explanation

The problem comes from Google's bot-recognition system. Like other such systems, it stops serving requests that arrive too frequently from suspicious clients. One of the signals used to recognize trustworthy clients is the presence of specific headers generated by the JavaScript code running on the web pages. Unfortunately, the Python requests library provides no such camouflage against these bot-recognition systems, since it does not execute JavaScript at all. So the idea behind my patch is to reuse the headers generated by my browser while interacting with Google Trends. Those headers were produced while I was logged in with my Google account; in other words, they are linked to my account, so to Google I look trustworthy.
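To illustrate the gap (this snippet is my own, not part of pytrends): the default headers sent by the requests library are easy to tell apart from a real browser's, which is part of why bare scripts get flagged:

```python
import requests

# Headers a bare requests session sends by default; the User-Agent typically
# advertises the library itself, e.g. "python-requests/2.x".
default_headers = dict(requests.Session().headers)
print(default_headers.get("User-Agent"))

# Browser-like headers copied from a real, logged-in session look very
# different (the values below are illustrative placeholders).
browser_headers = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/120.0 Safari/537.36",
    "Accept-Language": "en-US,en;q=0.9",
    "Referer": "https://trends.google.com/trends/explore",
}
```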

Solution

I solved it in the following way:

  • First of all, you must use Google Trends from your web browser while you are logged in with your Google account;
  • In order to track the actual HTTP GET request made (I am using Chromium), go to "More Tools" -> "Developer Tools" -> "Network" tab;
  • Visit the Google Trends page and perform a search for a trend; it will trigger a lot of HTTP requests in the left sidebar of the "Network" tab;
  • Identify the GET request (in my case it was /trends/explore?q=topic&geo=US), right-click on it and select Copy -> Copy as cURL;
  • Then go to curlconverter.com, paste the cURL command on the left side, and copy the "headers" dictionary from the Python script generated on the right side of the page;
  • Then go to your code and subclass the TrendReq class so you can pass in the custom headers you just copied:
  • import requests

    from pytrends.request import TrendReq as UTrendReq

    GET_METHOD = 'get'

    # Paste here the "headers" dictionary copied from curlconverter.com
    headers = {
    }


    class TrendReq(UTrendReq):
        def _get_data(self, url, method=GET_METHOD, trim_chars=0, **kwargs):
            return super()._get_data(url, method=GET_METHOD, trim_chars=trim_chars, headers=headers, **kwargs)
    
  • Remove any other "from pytrends.request import TrendReq" from your code, since it will now use the subclass you just created;
  • Try again;
  • If the error message comes back at any point in the future, repeat the procedure: you need to refresh the header dictionary with new values, and the request may trigger the captcha mechanism.
  • Is it possible to use the requests_args argument of TrendReq to add the headers? Or do you need to edit the source? – GregOliveira Oct 18, 2022 at 8:49

    As far as I remember, it is not possible to pass the headers as an argument of TrendReq's constructor. So I applied this custom patch by overriding the _get_data method. Yes, I would need to edit the source to have the headers passed in the init or directly during the actual call. – Antonio Ercole De Luca Oct 18, 2022 at 14:46

    Looks like Google is blocking by IP: I made this headers change, but the blocking (HTTP error 429) remained. – bl79 Jan 8 at 6:12
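Following up on the comment thread above: recent pytrends releases do expose a requests_args parameter on TrendReq, which may let you inject the copied headers without subclassing. A minimal sketch, assuming your installed version supports it (the header values are placeholders):

```python
# Headers copied from the browser session via curlconverter
# (the values here are illustrative placeholders).
headers = {
    "User-Agent": "Mozilla/5.0 ...",
    "Accept-Language": "en-US,en;q=0.9",
}

# requests_args is forwarded to the underlying requests calls by recent
# pytrends versions. Instantiation is shown commented out because TrendReq
# contacts Google at construction time:
#
#   from pytrends.request import TrendReq
#   pytrends = TrendReq(hl='en-US', tz=360, requests_args={'headers': headers})
requests_args = {'headers': headers}
```

If your installed version predates requests_args, the subclassing patch above remains the way to go.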

    This one took a while but it turned out the library just needed an update. You can check out a few of the approaches I posted here, both of which resulted in Status 429 Responses:

    https://github.com/GeneralMills/pytrends/issues/243

    Ultimately, I was able to get it working again by running the following command from my bash prompt to install the latest version:

    pip install --upgrade --user git+https://github.com/GeneralMills/pytrends

    Hope that works for you too.

    EDIT:

    If you can't upgrade from source you may have some luck with:

    pip install pytrends --upgrade

    Also, make sure you're running git as an administrator if on Windows.

    umm yes. This works for me. It sounds like you do not have git installed or your PATH variable is not set to accept the git command. You should have git + python3 installed; at least that's my environment. I upgraded directly from their source. – lopezdp Jun 7, 2018 at 4:13

    pip3.6 install --upgrade --user git+github.com/GeneralMills/pytrends forced an update from 4.4 to 4.5. Thank you! – Al Po Jan 3, 2019 at 7:05

    @qpaycm lol! the struggle is real! I remember that particular problem being quite troublesome for me too back then... Here is the project that I used that library on in case it can help to give you any insight: github.com/lopezdp/MachineLearningResearch/blob/master/… – lopezdp Jan 4, 2019 at 1:44

    Thanks to you I finished my research in a few hours. Today, on the 2nd run, I had to proxy after experiencing a 429. Now I think with a proxy carousel there will be a higher chance of getting a "Max retries exceeded" error. Guess I should handle it from exceptions.py? – Al Po Jan 4, 2019 at 16:21

    @qpaycm It depends on what you are doing. I experienced the same as well after correcting the issues as stated. What I had to do was play around with the quantity of requests I was making per iteration of my loop. In my specific case I was querying 60-day periods, and I switched to 90-day periods to minimize the total number of iteration cycles and stay under the 429 limit. I suspect you may have something similar, where you can tweak the size of the request volume to reduce the total request iterations a bit... that's what I did anyway. – lopezdp Jan 4, 2019 at 19:55
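The last comment's trick of enlarging each queried window to lower the total request count can be sketched as a small helper (my own illustration, not part of pytrends):

```python
from datetime import date, timedelta

def timeframes(start, end, days=90):
    """Split [start, end] into pytrends-style 'YYYY-MM-DD YYYY-MM-DD'
    timeframe strings of at most `days` days each. Fewer, larger windows
    mean fewer requests, and so less chance of tripping the 429 limit."""
    frames = []
    cur = start
    while cur <= end:
        stop = min(cur + timedelta(days=days - 1), end)
        frames.append(f"{cur.isoformat()} {stop.isoformat()}")
        cur = stop + timedelta(days=1)
    return frames

# Half a year in 90-day windows instead of 60-day ones: fewer calls overall.
print(timeframes(date(2018, 1, 1), date(2018, 6, 30)))
```

Each string can then be passed as the timeframe argument of build_payload, ideally with a pause between calls.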

    I had the same problem even after updating the module with pip install --upgrade --user git+https://github.com/GeneralMills/pytrends and restarting Python.

    But the issue was solved with the method below:

    Instead of

    pytrends = TrendReq(hl='en-US', tz=360, timeout=(10,25), proxies=['https://34.203.233.13:80',], retries=2, backoff_factor=0.1, requests_args={'verify':False})
    

    Just ran:

    pytrend = TrendReq()
    

    Hope this can be helpful!
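If the 429 keeps coming back intermittently, a client-side exponential backoff can help space the requests out. This is a generic sketch of my own, not pytrends functionality; fetch stands for any zero-argument callable that raises on failure:

```python
import time

def fetch_with_backoff(fetch, max_tries=5, base_delay=60):
    """Retry `fetch`, sleeping exponentially longer between attempts
    (pytrends raises ResponseError when Google answers with a 429)."""
    for attempt in range(max_tries):
        try:
            return fetch()
        except Exception:
            if attempt == max_tries - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt))
```

For example, fetch could be a small function that calls pytrend.build_payload(...) followed by pytrend.interest_over_time().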

    I was having the same issue and did something really similar to Antonio Ercole De Luca. For me, however, the issue was with the cookies and not the headers.

    I created a subclass like Antonio did, but this time modifying the cookie method:

    from pytrends.request import TrendReq

    cookies = {
        "SEARCH_SAMESITE": "####",
        "SID": "####",
        # ... plus the rest of the cookies copied from the browser,
        # which must include "NID" for the filter below to return anything
    }

    class CookieTrendReq(TrendReq):
        def GetGoogleCookie(self):
            # Return only the NID cookie, which is all pytrends needs
            return dict(filter(lambda i: i[0] == 'NID', cookies.items()))
    

    And I used the same method to get the cookies as he did to get the headers:

  • visit trends.google.com
  • open developer tools and go to the network tab
  • make a search, and then right-click on the top GET request (should look like explore?q=...)
  • copy the request as bash-cURL
  • paste this into curlconverter.com and get the cookies!
  • For those using this approach: if it stops working after a few tries, attempt this: github.com/GeneralMills/pytrends/pull/… (make sure to uncomment the "cookie" parameter in your headers dictionary when passing it in as a variable to requests_args of TrendReq). – User yesterday
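The NID filter inside GetGoogleCookie above can be sanity-checked in isolation with dummy values (the cookie contents here are made up):

```python
# Dummy cookie values standing in for the ones copied from the browser.
cookies = {
    "SEARCH_SAMESITE": "abc",
    "SID": "def",
    "NID": "ghi",
}

# pytrends expects GetGoogleCookie to return a dict holding only the NID cookie.
nid_only = dict(filter(lambda i: i[0] == 'NID', cookies.items()))
print(nid_only)  # {'NID': 'ghi'}
```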
