# Alternating error responses
## Summary
The problem of alternating expected and error responses, as described in #17 (closed) and #18 (closed), still persists. It occurs if the HTTP connection is kept open (keep-alive), no `Content-Length` header is sent, the request body contains a valid payload, and there is more data after the valid payload.
## Steps to reproduce
Taking the `/irrelevant/file` endpoint as an example:

- Send a POST request to the endpoint `/irrelevant/file`
  - with body `{"abs_path":"a"}_`
  - with header `Content-Type: application/json`
  - with header `Connection: keep-alive`
  - without a `Content-Length` header
- Send any request to any endpoint (may be identical)
## cURL command to reproduce
```shell
curl --request POST 'http://localhost:4444/irrelevant/file' \
  --header 'Content-Type: application/json' \
  --header 'Content-Length:' \
  --data-raw '{"abs_path":"a"}_' \
  --next --request GET 'http://localhost:4444/bg'
```
## Observed behavior
- The first request succeeds with `{"result":true}`.
- The second request fails with an HTML-formatted `400 Bad Request` message stating `Illegal HTTP request`.
## Desired behavior
I'm not sure what this should be. Depending on the additional payload supplied, the first request could warrant a `400 Bad Request`: if the excess is only whitespace, the request may be considered OK, but if it is real garbage, the request should be rejected. Either way, the second request should succeed (if it is correctly formatted).
## Solution
Internally, the predicate `http_client:http_convert_data/4` (defined in `library(http/http_json)`) is used to parse the JSON payload. The behavior of this predicate depends on the `Content-Length` header:

- If it is not supplied, the stream is read (and parsed) until parsing the first JSON object succeeds. Further data is left on the stream.
- If such a header is supplied, the stream is read (and parsed) until the full content length is consumed or parsing the first JSON object succeeds. Further data (up to the content length) is discarded.
The first case leads to the described problem: the additional payload "starts" the next request, making it invalid. The second case is problematic in itself, but that is not relevant for this issue.
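The effect of the first case can be sketched in Python, using `json.JSONDecoder.raw_decode` as a stand-in for the Prolog parser (the byte string below mimics the keep-alive stream produced by the cURL command; it is illustrative, not the server's actual buffer):

```python
import json

# Mimics the keep-alive stream: a valid JSON body, a trailing "_",
# then the next pipelined request.
stream = b'{"abs_path":"a"}_GET /bg HTTP/1.1\r\n\r\n'

# Like case one above, raw_decode stops after the first complete JSON value
# and reports how far it read; everything after that stays on the stream.
obj, end = json.JSONDecoder().raw_decode(stream.decode("ascii"))
leftover = stream[end:]

print(obj)       # first request parses fine: {'abs_path': 'a'}
print(leftover)  # the next request now begins with "_", so it is rejected
```

The leftover bytes are exactly what the server later misreads as the start of the second request.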
There are different ways to circumvent the problem:

- Disallow POST requests without a `Content-Length` header.
- Always close the stream after each request (disregarding the `Connection` header).
- After parsing the JSON payload, read the stream until the end and handle excess payload manually.