Connecting over HTTPS using the Akka HTTP client


Jeroen Rosenberg

Jul 15, 2015, 10:13:14 AM
to akka...@googlegroups.com
I'm trying to connect to a third-party streaming API over HTTPS using "akka-stream-experimental" % "1.0-RC4" and "akka-http-experimental" % "1.0-RC4".

My code looks like this:

class GnipStreamHttpClient(host: String, account: String, processor: ActorRef) extends Actor with ActorLogging {
  this: Authorization =>

  private val system = context.system
  private val endpoint = Uri(s"https://$host/somepath")
  private implicit val executionContext = system.dispatcher
  private implicit val flowMaterializer: Materializer = ActorMaterializer(ActorMaterializerSettings(system))

  val client = Http(system).outgoingConnectionTls(host, port, settings = ClientConnectionSettings(system))

  override def receive: Receive = {
    case response: HttpResponse if response.status.intValue / 100 == 2 =>
      response.entity.dataBytes.map(processor ! _).runWith(Sink.ignore)
    case response: HttpResponse =>
      log.info(s"Got unsuccessful response $response")
    case _ =>
      val req = HttpRequest(GET, endpoint).withHeaders(`Accept-Encoding`(gzip), Connection("Keep-Alive")) ~> authorize
      log.info(s"Making request: $req")
      Source.single(req)
        .via(client)
        .runWith(Sink.head)
        .pipeTo(self)
  }
}

As a result I'm getting an HTTP 404 response. This doesn't make much sense to me, as when I copy the full URL into curl it just works:
curl --compressed -v -uuser:pass https://my.streaming.api.com/somepath

Also, when I connect to a mock implementation of this streaming API over plain HTTP, my code works fine (using outgoingConnection instead of outgoingConnectionTls).

What am I doing wrong when making HTTPS requests? As far as I understand, changing to outgoingConnectionTls should be enough for most cases.

Any help is appreciated!

Johannes Rudolph

Jul 15, 2015, 10:47:10 AM
to akka...@googlegroups.com
Hi Jeroen,

is this a virtual host you are connecting to? This may hint at the client not sending the TLS SNI extension correctly, which could be a bug in akka-http or due to an old JDK version on your client. Which JDK version do you use?

https://en.wikipedia.org/wiki/Server_Name_Indication says that you need at least JDK 7 for SNI to work (only relevant if the host you connect to is an HTTPS virtual host).

Also, you could try the just released 1.0 version (though I cannot think of a reason why that should fix it).

Johannes

Jeroen Rosenberg

Jul 15, 2015, 11:01:32 AM
to akka...@googlegroups.com
Thanks Johannes for the swift reply :)

I'm using JDK 7. I strongly suspect the host I connect to (stream.gnip.com) is a virtual host (as they also provide other endpoints, such as api.gnip.com). I just tried with 1.0 and it gives me the same result.

Jeroen

Jeroen Rosenberg

Jul 15, 2015, 12:25:53 PM
to akka...@googlegroups.com
Btw,

I tried connecting to the stream using plain old java.net.HttpURLConnection:

val connection = new java.net.URL("...").openConnection()
// set headers
connection.getInputStream
connection.getResponseCode

This way it just works and I get status code 200. So it seems something goes wrong in akka-http.

Jeroen 

Johannes Rudolph

Jul 15, 2015, 5:59:47 PM
to akka-user
Hi Jeroen,

it would be very helpful if you could somehow come up with a
reproducer against some publicly accessible endpoint which would show
the issue. It seemed to work for all the URLs I tested.

Johannes



--
Johannes

-----------------------------------------------
Johannes Rudolph
http://virtual-void.net

Jeroen Rosenberg

Jul 16, 2015, 10:05:57 AM
to akka...@googlegroups.com
Hi Johannes, 

I found out that I made quite a stupid mistake. In my setup of the client I was already supplying the host and port, yet when creating the HttpRequest I was using a fully qualified URL instead of a Uri without host and port. Apparently, my mock service didn't care about this :/ It was also difficult to spot the problem, because in the logs it appeared as if I was calling the correct URL.
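For anyone hitting the same thing, here is a rough sketch of what the fix looks like against the akka-http 1.0 connection-level API (the host name and path are placeholders, not the real endpoint):

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpMethods, HttpRequest, Uri}
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}

implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()

// Host and port are fixed by the connection itself...
val client = Http(system).outgoingConnectionTls("my.streaming.api.com", 443)

// ...so the request should carry a path-only Uri. A fully qualified
// Uri here silently produced the wrong request target in my case.
val req = HttpRequest(HttpMethods.GET, Uri("/somepath"))

// Future[HttpResponse] for the single request
val responseFuture = Source.single(req).via(client).runWith(Sink.head)
```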

Anyway, it's working as expected now. I do have one small, unrelated issue. The stream I'm consuming is gzip-compressed, so I'm unzipping the ByteStrings as they come in, as part of my FlowGraph. However, when I try to unzip the ByteStrings coming out of the response.entity.dataBytes source, I'm getting errors about the gzip format, as if the chunks I'm getting are incomplete. If I use java.net combined with InputStreamSource:

val url = new URL(endpoint.toString)
val connection = url.openConnection().asInstanceOf[HttpURLConnection]
connection.setRequestProperty("Authorization", req.getHeader("Authorization").get.value)
connection.setRequestProperty("Accept-Encoding", req.getHeader("Accept-Encoding").get.value)
InputStreamSource(() => new GZIPInputStream(connection.getInputStream)).map { processor ! _ }.runWith(Sink.ignore)
 

it works fine, and it ensures I'm getting elements line by line. Is response.entity.dataBytes chunking it differently by default, or do you have any other idea what I'm doing wrong?

Jeroen

Johannes Rudolph

Jul 16, 2015, 10:24:11 AM
to akka-user
Hi Jeroen,

On Thu, Jul 16, 2015 at 4:05 PM, Jeroen Rosenberg
<jeroen.r...@gmail.com> wrote:
> Anyways, it's working as expected. I do have one small unrelated issue. The
> stream I am consuming is in Gzip format, so I'm unzipping the ByteStrings as
> they come in as part of my FlowGraph. However, when I try to unzip the
> ByteString coming out of the response.entity.dataBytes source I'm getting
> errors on the Gzip format. As if the Chunks I'm getting are incomplete.

Can you show some code? Are you using
`akka.http.scaladsl.coding.Gzip`? If not, can you try if that works?
You should be able to either use `Gzip.decode(response)`,
`Gzip.decode(response.entity)`, or use `Gzip.decoderFlow` manually.

Johannes

Jeroen Rosenberg

Jul 16, 2015, 10:35:36 AM
to akka...@googlegroups.com
Thanks, that did the trick. For the record, I was doing:

def gunzip(bytes: Array[Byte]) = {
  val output = new ByteArrayOutputStream()
  FileUtils.copyAll(new GZIPInputStream(new ByteArrayInputStream(bytes)), output)
  output.toString
}

... // further in the code, as part of my flow graph
.map(byteString => gunzip(byteString.toArray))

I replaced it with 
.via(Gzip.decoderFlow)

Now it works :)

Thanks so much for your help!

Johannes Rudolph

Jul 16, 2015, 11:18:33 AM
to akka-user
Hi Jeroen,

On Thu, Jul 16, 2015 at 4:35 PM, Jeroen Rosenberg
<jeroen.r...@gmail.com> wrote:
>> def gunzip(bytes: Array[Byte]) = {
>> val output = new ByteArrayOutputStream()
>> FileUtils.copyAll(new GZIPInputStream(new ByteArrayInputStream(bytes)),
>> output))
>> output.toString
>> }

This creates a new GZIPInputStream for every chunk (incidentally, how
the chunks are cut is not under your control). However, gzip
compression is stateful, and recreating the GZIPInputStream resets
that state every time you create a new instance. Therefore, it cannot
work as simply as this.

The `Gzip.decoderFlow`, in contrast, keeps this state between chunks.
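You can see the effect with nothing but java.util.zip. This sketch compresses a payload, cuts the compressed bytes at an arbitrary point (as a network stream might), and shows that a fresh GZIPInputStream per chunk fails while a single stateful decoder over the whole stream succeeds:

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream}
import java.util.zip.{GZIPInputStream, GZIPOutputStream}

// Compress a known payload in one go.
val payload = Array.fill(10000)('a'.toByte)
val bos = new ByteArrayOutputStream()
val gzOut = new GZIPOutputStream(bos)
gzOut.write(payload)
gzOut.close()
val compressed = bos.toByteArray

// Cut the compressed bytes at an arbitrary point, like a chunk boundary.
val (chunk1, chunk2) = compressed.splitAt(compressed.length / 2)

// Wrapping a later chunk in a fresh GZIPInputStream fails: the gzip
// header and the decompressor state from the earlier bytes are gone.
val perChunkFails =
  try {
    new GZIPInputStream(new ByteArrayInputStream(chunk2)).read()
    false
  } catch {
    case _: java.io.IOException => true // e.g. "Not in GZIP format"
  }

// A single decoder fed the whole stream keeps its state across the
// chunk boundary and recovers the full payload.
val in = new GZIPInputStream(new ByteArrayInputStream(chunk1 ++ chunk2))
val out = new ByteArrayOutputStream()
val buf = new Array[Byte](4096)
var n = in.read(buf)
while (n != -1) {
  out.write(buf, 0, n)
  n = in.read(buf)
}

println(s"per-chunk decoding fails: $perChunkFails")  // true
println(s"stateful decoding recovers ${out.size} bytes")  // 10000
```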

Jeroen Rosenberg

Jul 17, 2015, 4:02:32 AM
to akka...@googlegroups.com
That clarifies it. Thanks!