Charles Web Debugging Proxy is a cross-platform HTTP debugging proxy server application written in Java. It enables the user to view HTTP, HTTPS, HTTP/2, and enabled TCP port traffic accessed from, to, or via the local computer. This includes requests and responses, including HTTP headers and metadata (e.g., cookies, caching, and encoding information), with functionality aimed at helping developers analyze connections and messaging.
Testing and debugging are part of any good application development lifecycle. While there are many tools and libraries for building Android applications, the Charles Web Debugging Proxy is a very impressive option when you consider the range of functions it performs.
Using this tool, you can cut down the time spent on debugging and solving issues. After several trials of my own, I was impressed by what I could accomplish and how easily and quickly I could debug and resolve issues.
Charles is an HTTP proxy, HTTP monitor, and reverse proxy that enables a developer to view all of the HTTP and SSL/HTTPS traffic between their machine and the Internet. This includes requests, responses, and the HTTP headers, which contain the cookies and caching information.
In a nutshell, the Charles Web Debugging Proxy is a software application that monitors network activities on personal computers, such as network requests and server responses. It intercepts and changes the data packets in real time.
Testing and debugging can be very frustrating and exhausting. Charles exists to reduce these kinds of work-induced stresses; with Charles in your development arsenal, you can quickly diagnose problems and implement quick fixes. I will outline a few use cases where I find Charles to be handy.
Network errors cannot be ignored when building mobile applications. They arise through network requests (e.g., internal server errors). Being able to test how your client-side logic responds to different network error responses supports robust error handling and rapid development.
The proxy we configured above will be used by web browsers, but it may not be used by other apps. You need to add configuration to your app so that it trusts the SSL certificates generated by Charles SSL proxying.
When writing a piece of software, you want to adopt a defensive programming approach to avoid unexpected outcomes in production, such as ensuring your app can gracefully handle network errors like a 500 internal server error.
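To make the defensive-programming idea concrete, here is a minimal Python sketch of the kind of status-code classification a client might apply before acting on a response. The function name and categories are illustrative, not from the article; the point is that a mocked 500 from Charles should land on a graceful, retryable path instead of crashing the app.

```python
# Hypothetical helper: classify an HTTP status code so the client can
# react gracefully (retry, show an error screen) instead of crashing.
def classify_status(status: int) -> str:
    """Map an HTTP status code to a coarse outcome the UI can handle."""
    if 200 <= status < 300:
        return "success"
    if status in (408, 429) or 500 <= status < 600:
        return "retryable"      # transient server/network trouble
    if 400 <= status < 500:
        return "client_error"   # bad request; don't retry blindly
    return "unknown"

# With Charles rewriting a response to 500 Internal Server Error,
# the app should take the "retryable" path.
print(classify_status(500))  # retryable
print(classify_status(200))  # success
```

In a real Android or iOS client the same decision would sit in your networking layer; mocking a 500 with Charles lets you exercise that branch without breaking the real backend.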
Before we proceed to mock an error response, ensure you can intercept and decrypt the response from the TMDB API. If the setup is done correctly, you should see something similar to the screenshot below:
Aside from Map Remote, there is a corresponding function, Map Local. They have similar functions, but what differentiates them is the location from which the response is served. The former returns a response from an online server, while the latter serves the response from your local machine via a JSON file.
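To illustrate the Map Local idea, here is a small Python sketch of the concept: the response body comes from a JSON file on disk instead of the real endpoint. The file contents and function names are made up for illustration; Charles performs this substitution transparently at the proxy layer rather than in your code.

```python
import json
import os
import tempfile

# A pretend TMDB-style payload saved to a local JSON fixture file,
# standing in for the file you would point Map Local at.
fixture = {"results": [], "total_results": 0}

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(fixture, f)
    path = f.name

def fetch_movies(mapped_file=None):
    """Return the API response, or the local fixture when 'mapped'."""
    if mapped_file:  # conceptually: Map Local is active for this URL
        with open(mapped_file) as fh:
            return json.load(fh)
    raise RuntimeError("real network call elided in this sketch")

# The client code is unchanged; only the source of the body differs.
print(fetch_movies(mapped_file=path)["total_results"])  # 0
os.unlink(path)
```

The practical benefit is the same as with Charles: you can iterate on client behavior against a fixed, editable response without touching the server.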
Developers of web and mobile applications often take advantage of HTTP proxies to debug their applications. This allows them to check the headers and body of outgoing requests and the related incoming responses, and to track down possible flaws in their client application and server backend without interrupting traffic. You can perform the same function with mobile applications secured by SeaCat Mobile Secure Gateway.
An HTTP debugging proxy is an intermediary service placed between a client and an application backend that can view information about requests and responses. It lets you inspect the communication headers and body to verify, for instance, that the proper cookies are sent, or that the application backend's responses to specific requests are correct.
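As a minimal sketch of routing client traffic through such a proxy, the Python stdlib snippet below builds a request configured to go through a local debugging proxy (Charles listens on 127.0.0.1:8888 by default; adjust if yours differs). The URL and cookie value are placeholders; no network call is made here, we only confirm the header the proxy would display.

```python
import urllib.request

# Route stdlib HTTP traffic through a local debugging proxy.
# 127.0.0.1:8888 is Charles's default listen address (an assumption;
# check Proxy Settings in your own install).
proxy = urllib.request.ProxyHandler({
    "http": "http://127.0.0.1:8888",
    "https": "http://127.0.0.1:8888",
})
opener = urllib.request.build_opener(proxy)

# Build a request and attach a cookie so it shows up in the proxy UI.
req = urllib.request.Request("http://example.com/api/items")
req.add_header("Cookie", "session=abc123")

# The proxy would show this header on the wire; here we just confirm
# it is set before sending (opener.open(req) would perform the call).
print(req.get_header("Cookie"))  # session=abc123
```

The same pattern (point the client's HTTP stack at the proxy's host and port) applies whatever HTTP library or platform the client uses.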
So the question is, where does Charles Proxy come in? The communication between the client and SeaCat Mobile Secure Gateway is secured; there is no way to read what data resides in the traffic. That is why we put Charles Proxy between the SeaCat Gateway and the application backend. On your development machine, the setup looks like this:
The point is to let SeaCat Mobile Secure Gateway use Charles Proxy as its application backend: Charles Proxy reads and displays the content of each request, forwards it to the application backend, reads and displays the response, and sends it back to the SeaCat Mobile Secure Gateway, which delivers it to the client.
You can run your backend and use Charles Proxy to debug your mobile app secured by SeaCat Mobile Secure Gateway. You now have quick access to information like status codes, cookies, response/request times, etc.
The previous tutorial introduces several concepts and helps you understand the basics of REST API integration with an iOS client written in Swift. There are several limitations, with data storage being the most important. This article provides instructions on how to work around this restriction.
As the market for cloud computing and mobile devices grows, another specific option has become available. It's called (Mobile) Backend-as-a-Service (BaaS), and it is extremely useful when you want to subscribe to a complete backend service (alongside the core backend solution, there is usually a lot of additional functionality and statistics) and focus primarily on developing the client side of your mobile apps.
One of the main research and development topics nowadays is data processing and analysis, which can help companies discover relevant information about their customers or technologies using reports, visualizations, dashboards, and other business intelligence outputs.
I read somewhere that the number of init.js requests that ESRI spawns is somehow related to the number of CPU cores you have. Is that true? Is there a way to configure or control the number of requests that ESRI makes?
Perhaps more importantly for your situation, we have previously investigated the browser caching of workers, which is related to the init.js requests you are seeing. We discovered that dev tools in some browsers (e.g., Chrome and Firefox) have misreported subsequent worker requests as HTTP 200s. We had to use the Charles debugging proxy to verify they were actually being correctly cached as 304s. I don't currently have it installed, but I can install it and retest on our end. I did just notice that Safari correctly reports 304s.
Is this something you can double check on your application?
Yes, you can use Charles debugging proxy to determine if the requests are being cached for "/esri/core/workers/init.js". Also important, make sure you have unchecked "Disable caching" in the dev tools Network tab.
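The 200-vs-304 distinction being checked here comes from HTTP's conditional-request mechanism: the browser revalidates its cached copy by resending the resource's ETag, and the server answers 304 Not Modified (no body) when nothing changed. The Python sketch below models that server-side decision; the ETag values and function name are illustrative only.

```python
# Tiny model of a server's cache-validation decision for a resource
# like /esri/core/workers/init.js. Not real server code; it only
# illustrates why a proxy shows 304s where dev tools may show 200s.
def respond(current_etag, if_none_match):
    """Return (status, body) for a GET with an optional If-None-Match."""
    if if_none_match == current_etag:
        return 304, None            # unchanged: no body on the wire
    return 200, b"...init.js..."    # first request, or resource changed

status, _ = respond('W/"abc"', None)        # cold cache
print(status)  # 200
status, _ = respond('W/"abc"', 'W/"abc"')   # revalidation hit
print(status)  # 304
```

A proxy like Charles sits on the wire, so it reports the status the server actually sent, which is why it is a reliable cross-check when dev tools relabel cached responses.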
Hi @JonathanTiu, you should only see the first request to , depending on how you've configured Charles. You shouldn't see any subsequent requests to the same modules, e.g. init.js. If there is only that one request, browser caching is working correctly. Did that make more sense?
The reason I'm scrutinizing this so closely is that since we moved to 4.28, I've noticed we've gotten slower, and I can see these 8 init.js requests get progressively slower. You can see they are stalled for long periods of time (over 100ms). That's huge.
Whereas in 4.27, we have the same 8 init.js requests, but the longest any request has to wait is around 14ms. That's a huge difference. I also notice the init.js requests are launched earlier in 4.27; in 4.28 they seem to be launched later, which further adds to the delay I'm seeing in our app.
@JonathanTiu, with regard to the increased init.js request times, thanks for the information. That might be related to a size increase in the worker between 4.27 and 4.28. I can also repro dev tools reporting the init.js requests as stalled when caching is enabled. We need to do some research; I'll open an issue pending some initial investigation. It's not clear what's actually happening versus what might be misreported in dev tools, e.g. 200 vs. 304 status response codes.
I just ran our app with 4.28 again this morning. I'm not seeing long stall times, but I am seeing long server response times. Below is a screenshot of the 8th request. With each request to , the server response wait time gets longer until it reaches its peak (211ms) on the 8th request. Is something bottlenecking?
@JonathanTiu I verified in Charles, while using Chrome, that the SDK's wasm requests are being correctly cached. Chrome 119 dev tools appear to incorrectly report cached file requests as HTTP 200s. Here's a screenshot:
Also, the wasm scripts are being loaded correctly (synchronously). This is the pattern defined by the WorkerGlobalScope of the browser's Web Workers API. More information is available on MDN here: -US/docs/Web/API/WorkerGlobalScope/importScripts. As you may know, when stalls are reported in the network tab of the dev console, they can be the result of many different factors. Taking that into account, I didn't see anything unusual after multiple tests.
Charles Proxy is a fantastic cross-platform HTTP/HTTPS debugging proxy server application, probably the most user-friendly of its genre. Its initial setup is painless, and it can be configured through a nice user interface.
I've recently started to use it a lot more than I used to at my day job, so today I spent some time checking how Charles usage can be automated through a command-line interface.
Being cross-platform, Charles is compatible with macOS, Linux, and Windows. The instructions in this post are based on macOS, so if you're following along on another OS, they might need some minor adjustments.
Charles supports a few command line options out of the box, documented here.
Unfortunately, they seem to operate only as parameters for new Charles sessions, so you won't be able to run commands against an already running instance of Charles.