list_links_docs = []
soup = get_link_decode(URL)
for link in range(10):
    link = soup.find_all("a", text="{}".format(link))
    list_links_docs.append(link)
for link in list_links_docs:
    lis = link.get("href")
    print(lis)

Running this program produces the following error:

Traceback (most recent call last):
  File "D:\APP\PyCharm 2018.1.4\helpers\pydev\pydevd.py", line 1664, in <module>
    main()
  File "D:\APP\PyCharm 2018.1.4\helpers\pydev\pydevd.py", line 1658, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "D:\APP\PyCharm 2018.1.4\helpers\pydev\pydevd.py", line 1068, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "D:\APP\PyCharm 2018.1.4\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "D:/PyDate/Climb_CTOLib.com/Android_Climb2.py", line 73, in <module>
    lis = link.get("href")
  File "D:\APP\Anaconda3\lib\site-packages\bs4\element.py", line 1884, in __getattr__
    "ResultSet object has no attribute '%s'. You're probably treating a list of items like a single item. Did you call find_all() when you meant to call find()?" % key
AttributeError: ResultSet object has no attribute 'get'. You're probably treating a list of items like a single item. Did you call find_all() when you meant to call find()?

The later call to get() fails because the <a> tags were selected with find_all(), which returns a ResultSet (a list-like collection of results) rather than a single Tag, so the ResultSet itself has no get() method.
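The difference can be shown with a minimal, self-contained sketch (the inline HTML below is invented to mimic the pagination links in this post; only the bs4 package is assumed to be installed):

```python
from bs4 import BeautifulSoup

# Invented sample HTML resembling the pagination links discussed here
html = ('<a href="/android/docs/android-pg-1.html">1</a>'
        '<a href="/android/docs/android-pg-2.html">2</a>')
soup = BeautifulSoup(html, "html.parser")

links = soup.find_all("a")      # ResultSet: a list-like container of Tags
print(type(links).__name__)     # ResultSet
# links.get("href")             # would raise: ResultSet has no attribute 'get'

for tag in links:               # each element inside IS a Tag, which has .get()
    print(tag.get("href"))
```

In short: call get() on the individual Tag objects inside the ResultSet, not on the ResultSet itself.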

Here is the list that was collected:

[[], [<a href="/android/docs/android-pg-1.html">1</a>], [<a href="/android/docs/android-pg-2.html">2</a>], [], [], [], [], [], [], []]

Instead of the find_all() selector, switch to the find() selector:

list_links_docs = []
soup = get_link_decode(URL)
for link in range(10):
    link = soup.find("a", text="{}".format(link))
    list_links_docs.append(link)
for link in list_links_docs:
    if link is not None:  # find() returns None when no match is found
        lis = link.get("href")
        print(lis)

Revised 07022004:

list_links_docs = []
soup = get_link_decode(URL)
for link in range(10):
    link = soup.find("a", text="{}".format(link))
    if link is not None:  # skip numbers with no matching <a> tag
        lis = link.get("href")
        list_links_docs.append(lis)
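Alternatively, the original find_all() call could be kept, as long as the ResultSets are iterated with an inner loop; an empty ResultSet then simply contributes nothing. A sketch under the same assumptions (inline HTML invented for illustration, bs4 installed; string= is the newer spelling of the text= keyword in recent bs4 versions):

```python
from bs4 import BeautifulSoup

html = ('<a href="/android/docs/android-pg-1.html">1</a>'
        '<a href="/android/docs/android-pg-2.html">2</a>')
soup = BeautifulSoup(html, "html.parser")

list_links_docs = []
for n in range(10):
    # string= matches the tag text; returns a (possibly empty) ResultSet
    list_links_docs.append(soup.find_all("a", string=str(n)))

hrefs = []
for result_set in list_links_docs:
    for tag in result_set:  # an empty ResultSet yields nothing
        hrefs.append(tag.get("href"))
print(hrefs)
```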