
The full error message is as follows:

[2019-06-12 18:12:13.199][WARN ][][ org.apache.kafka.common.network.Selector.poll(Selector.java:276)
] ==> Error in I/O with /192.168.10.165
java.io.EOFException: null
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:248)
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192)
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
        at java.lang.Thread.run(Thread.java:745)

The Kafka version in use is as follows:

<!-- https://mvnrepository.com/artifact/org.springframework.integration/spring-integration-kafka -->
<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-kafka</artifactId>
    <version>1.3.0.RELEASE</version>
</dependency>

Due to project constraints, the Kafka version in use is kafka_2.10-0.8.2.2.jar, which is quite old, while the backend is built on Spring Boot 2.0.0.RELEASE; spring-integration-kafka is therefore used for the configuration.
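
For reference, a minimal sketch of how these XML files might be pulled into the Spring Boot application (the class name and classpath locations are assumptions, not taken from the original project):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ImportResource;

// Import the producer/consumer XML definitions into the Boot context.
@SpringBootApplication
@ImportResource({"classpath:spring-kafka-producer.xml", "classpath:spring-kafka-consumer.xml"})
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}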

The conclusion first:

First of all, this error has little practical impact; it is specific to the 0.8 client, and it no longer occurs once you upgrade to 0.9 or later. If you have to stay on 0.8, see KAFKA-3205 (Support passive close by broker), in particular the "Support passive close by broker" and "Fix white space" attachments.
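
If upgrading is an option, the cleanest fix is simply a newer client, along these lines (the version is illustrative, and spring-integration-kafka would need to move to a matching release as well):

<!-- Illustrative only: a 0.9+ client honours connections.max.idle.ms and
     closes idle connections cleanly on the client side too. -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.9.0.1</version>
</dependency>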

First, the producer configuration file, spring-kafka-producer.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:int="http://www.springframework.org/schema/integration"
    xmlns:int-kafka="http://www.springframework.org/schema/integration/kafka"
    xmlns:task="http://www.springframework.org/schema/task"
    xsi:schemaLocation="http://www.springframework.org/schema/integration/kafka http://www.springframework.org/schema/integration/kafka/spring-integration-kafka.xsd
        http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration.xsd
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task.xsd">
    <!-- common config -->
    <bean id="stringSerializer" class="org.apache.kafka.common.serialization.StringSerializer"/>
    <bean id="kafkaEncoder" class="org.springframework.integration.kafka.serializer.avro.AvroReflectDatumBackedKafkaEncoder">
        <constructor-arg value="java.lang.String" />
    </bean>
    <bean id="producerProperties"
        class="org.springframework.beans.factory.config.PropertiesFactoryBean">
        <property name="properties">
            <props>
                <prop key="topic.metadata.refresh.interval.ms">3600000</prop>
                <prop key="message.send.max.retries">5</prop>
                <prop key="serializer.class">kafka.serializer.StringEncoder</prop>
                <prop key="request.required.acks">1</prop>
            </props>
        </property>
    </bean>
    <!-- topic test config  -->
    <int:channel id="kafkaTopicSend">
        <int:queue />
    </int:channel>
    <int-kafka:outbound-channel-adapter
        id="kafkaOutboundChannelAdapterTopicTest" kafka-producer-context-ref="producerContextTopicTest"
        auto-startup="true" channel="kafkaTopicSend" order="3">
        <int:poller fixed-delay="1000" time-unit="MILLISECONDS"
            receive-timeout="1" task-executor="taskExecutor" />
    </int-kafka:outbound-channel-adapter>
    <task:executor id="taskExecutor" pool-size="5"
        keep-alive="120" queue-capacity="500" />
    <int-kafka:producer-context id="producerContextTopicTest"
        producer-properties="producerProperties">
        <int-kafka:producer-configurations>
            <!-- configuration for multiple topics -->
            <int-kafka:producer-configuration
                broker-list="192.168.10.170:9092,192.168.10.170:9093,192.168.10.170:9094"
                key-class-type="java.lang.String"
                key-serializer="stringSerializer"
                value-class-type="java.lang.String"
                value-serializer="stringSerializer"
                topic="dealMessage" />       
           <int-kafka:producer-configuration
                broker-list="192.168.10.170:9092,192.168.10.170:9093,192.168.10.170:9094"
                key-class-type="java.lang.String"
                key-serializer="stringSerializer"
                value-class-type="java.lang.String"
                value-serializer="stringSerializer"
                topic="keepAlive" />
        </int-kafka:producer-configurations>
    </int-kafka:producer-context>
</beans>
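
Sending then looks roughly like the following. The sketch assumes the spring-integration-kafka 1.x header names kafka_topic and kafka_messageKey; the service class itself is invented for illustration:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.MessageChannel;
import org.springframework.stereotype.Service;

// Drops a String payload onto the "kafkaTopicSend" channel; the outbound
// adapter routes it to the matching producer-configuration via the topic header.
@Service
public class KafkaProducerService {

    @Autowired
    @Qualifier("kafkaTopicSend")
    private MessageChannel kafkaTopicSend;

    public void send(String topic, String key, String payload) {
        kafkaTopicSend.send(MessageBuilder.withPayload(payload)
                .setHeader("kafka_topic", topic)      // assumed 1.x header name
                .setHeader("kafka_messageKey", key)   // assumed 1.x header name
                .build());
    }
}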

The consumer configuration file, spring-kafka-consumer.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xmlns:int="http://www.springframework.org/schema/integration"
     xmlns:int-kafka="http://www.springframework.org/schema/integration/kafka"
     xmlns:task="http://www.springframework.org/schema/task"
     xsi:schemaLocation="http://www.springframework.org/schema/integration/kafka http://www.springframework.org/schema/integration/kafka/spring-integration-kafka.xsd
    http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration.xsd
    http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
    http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task.xsd">
    <!-- topic test conf -->
    <int:channel id="inputFromKafka" >
        <int:dispatcher task-executor="kafkaMessageExecutor" />
    </int:channel>
    <!-- ZooKeeper configuration; multiple addresses can be listed -->
    <int-kafka:zookeeper-connect id="zookeeperConnect"
        zk-connect="192.168.10.170:2181" zk-connection-timeout="6000"
        zk-session-timeout="12000" zk-sync-time="200" />
    <!-- channel configuration; auto-startup="true" is required, otherwise no data is received -->
    <int-kafka:inbound-channel-adapter
        id="kafkaInboundChannelAdapter" kafka-consumer-context-ref="consumerContext"
        auto-startup="true" channel="inputFromKafka">
        <int:poller fixed-delay="1" time-unit="MILLISECONDS" />
    </int-kafka:inbound-channel-adapter>
    <task:executor id="kafkaMessageExecutor" pool-size="8" keep-alive="120" queue-capacity="500" />
    <bean id="kafkaDecoder"
        class="org.springframework.integration.kafka.serializer.common.StringDecoder" />
    <bean id="consumerProperties"
        class="org.springframework.beans.factory.config.PropertiesFactoryBean">
        <property name="properties">
            <props>
                <prop key="auto.offset.reset">smallest</prop>
                <prop key="socket.receive.buffer.bytes">10485760</prop> <!-- 10M -->
                <prop key="fetch.message.max.bytes">5242880</prop>
                <prop key="auto.commit.interval.ms">1000</prop>
            </props>
        </property>
    </bean>
    <!-- bean that receives the messages -->
    <bean id="kafkaConsumerService" class="cn.test.kafka.KafkaConsumerService" />
    <!-- specify the receiving method -->
    <int:outbound-channel-adapter channel="inputFromKafka"
        ref="kafkaConsumerService" method="processMessage" />
    <int-kafka:consumer-context id="consumerContext"
        consumer-timeout="1000" zookeeper-connect="zookeeperConnect"
        consumer-properties="consumerProperties">
        <int-kafka:consumer-configurations>
            <int-kafka:consumer-configuration
                group-id="default1" value-decoder="kafkaDecoder" key-decoder="kafkaDecoder"
                max-messages="5000">
                <!-- configuration for the two topics -->
                <int-kafka:topic id="dealMessage" streams="4" />
                <int-kafka:topic id="keepAlive" streams="4" />
            </int-kafka:consumer-configuration>
        </int-kafka:consumer-configurations>
    </int-kafka:consumer-context>
</beans>
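
The cn.test.kafka.KafkaConsumerService bean referenced above is not shown in the original; a plausible sketch follows. The payload shape (topic -> partition -> list of decoded messages) is my assumption about the high-level consumer adapter and should be verified against your version:

package cn.test.kafka;

import java.util.List;
import java.util.Map;

// Receives batches from the "inputFromKafka" channel and logs each message.
public class KafkaConsumerService {

    public void processMessage(Map<String, Map<Integer, List<String>>> messages) {
        for (Map.Entry<String, Map<Integer, List<String>>> byTopic : messages.entrySet()) {
            for (Map.Entry<Integer, List<String>> byPartition : byTopic.getValue().entrySet()) {
                for (String message : byPartition.getValue()) {
                    System.out.printf("topic=%s partition=%d payload=%s%n",
                            byTopic.getKey(), byPartition.getKey(), message);
                }
            }
        }
    }
}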

Kafka configuration logged at application startup:

[2019-06-12 15:09:42.093][INFO ][][ org.apache.kafka.common.config.AbstractConfig.logAll(AbstractConfig.java:113)
] ==> ProducerConfig values: 
	compression.type = none
	metric.reporters = []
	metadata.max.age.ms = 300000
	metadata.fetch.timeout.ms = 60000
	acks = 1
	batch.size = 16384
	reconnect.backoff.ms = 10
	bootstrap.servers = [192.168.10.170:9092, 192.168.10.170:9093, 192.168.10.170:9094]
	receive.buffer.bytes = 32768
	retry.backoff.ms = 100
	buffer.memory = 33554432
	timeout.ms = 30000
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	retries = 0
	max.request.size = 1048576
	block.on.buffer.full = true
	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
	metrics.sample.window.ms = 30000
	send.buffer.bytes = 131072
	max.in.flight.requests.per.connection = 5
	metrics.num.samples = 2
	linger.ms = 0
	client.id = 

Three reference sites were consulted:

1. https://stackoverflow.com/questions/33432027/kafka-error-in-i-o-java-io-eofexception-null

The content is roughly as follows:

I am using Kafka 0.8.2.0 (Scala 2.10). In my log files, I see the following message intermittently. This seems like a connectivity issue, but I’m running both in my localhost.
Is this a harmless warning message or should I do something to avoid it?


2015-10-30 14:12:38.015  WARN 4251 --- [ad | producer-1] [                                    ] o.apache.kafka.common.network.Selector   : Error in I/O with localhost/127.0.0.1
java.io.EOFException: null
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:248)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
    at java.lang.Thread.run(Thread.java:745)
Phil Brock:

This is a bit later to the party, but may help someone - it would have helped me.

What you’re seeing occurs because the Kafka broker is passively closing the connection after a certain period of idleness is exceeded. It’s defined by this broker property: connections.max.idle.ms - the default is 10 minutes.

Apparently the kafka client in 0.8.x doesn’t honour that setting and just leaves idle connections open. You’ll see the warning in your logs but it should have no bad effect on your application.

More details here: https://issues.apache.org/jira/browse/KAFKA-3205

The broker config is documented here: https://kafka.apache.org/090/documentation/#configuration

In that table you’ll find:

Name: connections.max.idle.ms
Description: Idle connections timeout: the server socket processor threads close the connections that idle more than this
Type:long
Default: 600000

Hope that helps.


2. https://issues.apache.org/jira/browse/KAFKA-3205

The content is roughly as follows:

In a situation with a Kafka broker in 0.9 and producers still in 0.8.2.x, producers seems to raise the following after a variable amount of time since start :

2016-01-29 14:33:13,066 WARN [] o.a.k.c.n.Selector: Error in I/O with 172.22.2.170
java.io.EOFException: null
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62) ~[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
        at org.apache.kafka.common.network.Selector.poll(Selector.java:248) ~[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192) [org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191) [org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122) [org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]

This can be reproduced successfully by doing the following :

  1. Start a 0.8.2 producer connected to the 0.9 broker
  2. Wait 15 minutes, exactly
  3. See the error in the producer logs.
    Oddly, this also shows up in an active producer but after 10 minutes of activity.
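
To make the reproduction concrete, here is a minimal standalone sketch using the 0.8.2 "new" producer API (broker address and topic are placeholders):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Send one record, then idle past the broker's connections.max.idle.ms
// (10 minutes by default); the EOFException warning should then appear
// in the producer log.
public class IdleProducerRepro {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "127.0.0.1:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("test", "key", "value")).get();
        Thread.sleep(11 * 60 * 1000L); // idle longer than the default timeout
        producer.close();
    }
}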

Kafka’s server.properties :

broker.id=1
listeners=PLAINTEXT://:9092
port=9092
num.network.threads=2
num.io.threads=2
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/mnt/data/kafka
num.partitions=4
auto.create.topics.enable=false
delete.topic.enable=true
num.recovery.threads.per.data.dir=1
log.retention.hours=48
log.retention.bytes=524288000
log.segment.bytes=52428800
log.retention.check.interval.ms=60000
log.roll.hours=24
log.cleanup.policy=delete
log.cleaner.enable=true
zookeeper.connect=127.0.0.1:2181
zookeeper.connection.timeout.ms=1000000

Producer’s configuration :

	compression.type = none
	metric.reporters = []
	metadata.max.age.ms = 300000
	metadata.fetch.timeout.ms = 60000
	acks = all
	batch.size = 16384
	reconnect.backoff.ms = 10
	bootstrap.servers = [127.0.0.1:9092]
	receive.buffer.bytes = 32768
	retry.backoff.ms = 500
	buffer.memory = 33554432
	timeout.ms = 30000
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	retries = 3
	max.request.size = 5000000
	block.on.buffer.full = true
	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
	metrics.sample.window.ms = 30000
	send.buffer.bytes = 131072
	max.in.flight.requests.per.connection = 5
	metrics.num.samples = 2
	linger.ms = 0
	client.id = 

The key answer:

Mart Haitjema

I also ran into this issue and discovered that the broker closes connections that have been idle for connections.max.idle.ms (https://kafka.apache.org/090/configuration.html#brokerconfigs) which has a default of 10 minutes.
While this parameter was introduced in 0.8.2 (https://kafka.apache.org/082/configuration.html#brokerconfigs) it wasn’t actually enforced by the broker until 0.9.0 which closes the connections inside Selector.java::maybeCloseOldestConnection()
(see https://github.com/apache/kafka/commit/78ba492e3e70fd9db61bc82469371d04a8d6b762#diff-d71b50516bd2143d208c14563842390a).
While the producer config also defines this parameter with a default of 9 minutes, it does not appear to be respected by the 0.8.2.x clients which mean idle connections aren’t being closed on the client-side but are timed out by the broker.
When the broker drops the connection, it results in an java.io.EOFException: null exception on the producer-side that looks exactly like the one shown in the description.

To work around this issue, we explicitly set the connections.max.idle.ms to something very large in the broker config (e.g. 1 year) which seems to have mitigated the problem for us.

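
In other words, the broker-side workaround is a single line in server.properties; the value below is one year in milliseconds, but any sufficiently large value works:

# Keep idle connections open far longer than any realistic idle period,
# so the broker stops passively closing 0.8.x client connections.
connections.max.idle.ms=31536000000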

ASF GitHub Bot

GitHub user bondj opened a pull request:

https://github.com/apache/kafka/pull/1166

KAFKA-3205 Support passive close by broker

An attempt to fix KAFKA-3205. It appears the problem is that the broker has closed the connection passively, and the client should react appropriately.

In NetworkReceive.readFrom() rather than throw an EOFException (Which means the end of stream has been reached unexpectedly during input), instead return the negative bytes read signifying an acceptable end of stream.

In Selector if the channel is being passively closed, don’t try to read any more data, don’t try to write, and close the key.

I believe this will fix the problem.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bondj/kafka passiveClose

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1166.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

commit 5dc11015435a38a0d97efa2f46b4d9d9f41645b5
Author: Jonathan Bond <jbond@netflix.com>
Date: 2016-03-30T03:57:11Z
Support passive close by broker

This closes #1166
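
The core idea of the patch, rendered as a self-contained illustrative sketch (this is not the actual Kafka source, only the shape of the change):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;

// Instead of throwing EOFException when the peer has closed the socket,
// surface the negative read count so the caller can treat a passive close
// by the broker as an orderly end of stream and quietly close the channel.
public class PassiveCloseAwareReceive {
    private final ByteBuffer sizeBuffer = ByteBuffer.allocate(4);

    public long readFrom(ReadableByteChannel channel) throws IOException {
        int read = channel.read(sizeBuffer);
        if (read < 0) {
            return read; // -1: broker closed the connection; not an error
        }
        return read; // normal path: bytes read so far
    }
}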


Flavio Junqueira

The changes currently in 0.9+ doesn’t have as many messages printed out because both ends, client and server, enforce the connection timeout. The change discussed in the pull request doesn’t print it in the case of a passive close initiated by the server (in 0.9 the timeout is enforced), which is desirable only because it pollutes the logs otherwise. It is better that we keep these messages in 0.9 and later to be informed of connections being closed. They are not supposed to happen very often, but if it turns out to be a problem, we can revisit this issue.


3. https://github.com/apache/kafka/pull/1166

Author of the fix: Jonathan Bond.
The attachments submitted with the pull request contain the code-level solution.
