
AttributeError: ResultSet object has no attribute 'find'. You're probably treating a list of items like a single item. Did you call find_all() when you meant to call find()?

Can anyone tell me how to solve this? My code is below:

import requests  
r = requests.get('https://www.example.com')
from bs4 import BeautifulSoup  
soup = BeautifulSoup(r.text, 'html.parser')  
results = soup.find_all('div', attrs={'class':'product-item item-template-0 alternative'})
records = []  
for result in results:  
    name = results.find('div', attrs={'class':'name'}).text 
    price = results.find('div', attrs={'class':'price'}).text[13:-11]
    records.append((name, price,))

I want to ask a closely related question. I want to scrape multiple pages following the pattern below. I use the code below, but it still scrapes the first page only. Can you solve this issue?

import requests  
for i in range(100):   
    url = "https://www.example.com/a/a_{}.format(i)"
    r = requests.get(url)
from bs4 import BeautifulSoup  
soup = BeautifulSoup(r.text, 'html.parser')  
results = soup.find_all('div', attrs={'class':'product-item item-template-0 alternative'})
Does this answer your question? Beautiful Soup: 'ResultSet' object has no attribute 'find_all'? – AMC, Mar 22, 2020 at 22:35
import requests
from bs4 import BeautifulSoup

r = requests.get('https://www.example.com')
soup = BeautifulSoup(r.text, 'html.parser')
results = soup.find_all('div', attrs={'class':'product-item item-template-0 alternative'})
records = []
for result in results:
    name = result.find('div', attrs={'class':'name'}).text  # result, not results
    price = result.find('div', attrs={'class':'price'}).text[13:-11]
    records.append((name, price))
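The underlying point: find_all() returns a ResultSet, which is essentially a list of Tag objects and has no find() method of its own; each element inside it does. A quick check with a hypothetical HTML snippet, just to illustrate the types involved:

import requests
from bs4 import BeautifulSoup

# Hypothetical two-item snippet, only to show the types involved.
html = '<div class="name">A</div><div class="name">B</div>'
soup = BeautifulSoup(html, 'html.parser')

results = soup.find_all('div', attrs={'class': 'name'})
print(type(results))     # <class 'bs4.element.ResultSet'> -- list-like, no .find()
print(type(results[0]))  # <class 'bs4.element.Tag'> -- has .find(), .text, etc.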
I want to ask a closely related question. I want to scrape multiple pages (example.com/a/a_1, example.com/a/a_2, example.com/a/a_3, …). I use the code above, but it still scrapes the first page only. Can you solve this issue? – Yan Zhang, Feb 27, 2018 at 13:42
Oh, I see. Instead of looping 100 times, loop through the list of pages (and index it if necessary). – whackamadoodle3000, Feb 28, 2018 at 6:48

Try this: remove the 's' from 'results' inside the loop, specifically in the line assigning name.

Your erroneous line: name = results.find('div', attrs={'class':'name'}).text

With one change it becomes: name = result.find('div', attrs={'class':'name'}).text

Well, nice try!

import requests
from bs4 import BeautifulSoup

r = requests.get('https://www.example.com')
soup = BeautifulSoup(r.text, 'html.parser')

# find_all() returns a ResultSet; iterate it and call find() on each Tag
results = soup.find_all('div', attrs={'class':'product-item item-template-0 alternative'})
records = []
for result in results:
    name = result.find('div', attrs={'class':'name'}).text
    price = result.find('div', attrs={'class':'price'}).text[13:-11]
    records.append((name, price))
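One caveat worth adding: find() returns None when the tag is missing, so calling .text on a product that lacks a name or price div raises a similar AttributeError ('NoneType' object has no attribute 'text'). A defensive variant of the loop above, same selectors, just guarded:

records = []
for result in results:
    name_tag = result.find('div', attrs={'class': 'name'})
    price_tag = result.find('div', attrs={'class': 'price'})
    if name_tag is None or price_tag is None:
        continue  # skip items missing either field
    records.append((name_tag.text, price_tag.text[13:-11]))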
        
