Exporting Chrome and Google Bookmarks to Evernote

Recently I started revisiting the way my data was spread across different cloud services. Up until now, I’ve been using Google Bookmarks to keep all my bookmarks, lots and lots of them, using mainly tags to stay organized. A few years ago, when I decided to move away from Delicious, it felt like an obvious choice. There were a few hiccups along the way, but so far it has worked really well. I even created a Chrome extension to help with bookmarking and searching bookmarks from the omnibar. A few things annoyed me, though, like presenting tags the same way as folders, all listed on the left sidebar, creating a loooong page with nothing but links down the side.

Lately I also started bookmarking a few links on Evernote, especially references to technical articles and the like. In the spirit of simplifying, I decided to consolidate everything in the same place: Evernote. This meant I would have to migrate my 3k+ bookmarks from Google to Evernote, trying to preserve all tags and, if possible, creation dates.

Google Bookmarks doesn’t provide a long list of export options, only a single HTML format with links grouped by tag. The interesting thing about the format is that bookmarks are duplicated if they have more than one tag. One more annoying thing I would have to deal with.

Looked around for services or scripts that could help with the migration, but not much luck. Most guides only suggested importing the HTML file directly into Evernote, creating one single note, and that’s not what I wanted. Looked a little further and found a Python script on GitHub. Very simplistic. The script would parse the file and write the output in Evernote’s export format, ENEX, a simple XML file with Evernote tags.

Tried it, and it worked, at least in the sense that it ran and created a new file. Tried to import that file into Evernote: no luck. It failed without giving any clue about what was wrong. I decided to play with the file a little and found the problem: a few spaces and empty lines in the wrong place. Fixed the script and the import worked, sort of…

Remember the duplicate entries I mentioned in the exported file? The script did not account for that, so all entries with more than one tag were duplicated. That would not work. Since the script was almost there, I just decided to fork it and fix these small things. I included a check for duplicated bookmarks, and while I was at it, I also minified the output and added creation dates to the exported fields.

If you are looking for something similar, the script is on GitHub…

https://github.com/spiermar/bookmarks2evernote

Use it, fork it, improve it. And let me know what you think! ;-)

How to Change the Timeout on LoadRunner

I’m seeing a lot of searches for Timeout issues landing on the blog these days.

This is a very common issue when executing a scenario, and it basically means that the server has not responded within a specified amount of time. LoadRunner defaults to 120 seconds on all Web-based protocols (HTTP, Web Services, Click & Script), but this can easily be changed with a command at the beginning of the script: web_set_timeout.

The command takes only two parameters, the operation and the new value. The operation can be one of these three:

  • CONNECT: Timeout for establishing the connection to the Web server.
  • RECEIVE: Timeout for the next “portion” of the server response to arrive.
  • STEP: Timeout for each VuGen step.

Usually the one we see expiring the most is STEP, for obvious reasons. The error message should look something like “Step Download Timeout”.

The second parameter is the new value, expressed in seconds. So if we want to set a new value for STEP, we have to insert this code at the beginning of our action:

web_set_timeout("STEP","240");

Here, 240 seconds is our new value.

Usually I change the timeout value of all three operations, just to be sure:

web_set_timeout("STEP","240");
web_set_timeout("CONNECT","240");
web_set_timeout("RECEIVE","240");

Simple, isn’t it??

We just have to be careful when changing this configuration, because the default value is already more than enough for most user actions.

From my point of view, this should only be used in two cases:

  • When we really expect a transaction to be slow, like a large report or a file upload, something the users already expect to take a while (see the sketch after this list);
  • When we need to troubleshoot a slow transaction, meaning we want to wait a longer period for the response to arrive.
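
As a minimal sketch of the first case, here is how the timeout change could be wrapped around a slow report transaction. The transaction name, step name, and URL are placeholders, not from a real script:

// Minimal sketch: raise the step timeout only around a transaction we
// already expect to be slow, then restore the default afterwards.
// Transaction name, step name, and URL below are placeholders.
web_set_timeout("STEP", "240");

lr_start_transaction("generate_large_report");
web_url("generate_report",
    "URL=http://<your_server>/reports/generate?id=123",
    LAST);
lr_end_transaction("generate_large_report", LR_AUTO);

web_set_timeout("STEP", "120");   // back to the LoadRunner default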

That’s it!!

Creating Oracle 12 Scripts on LoadRunner

After receiving several questions about the new Oracle 11 protocol on LoadRunner and version 12 of the E-Business Suite, I thought it would be interesting to give a few tips about these technologies too. Oracle Web Applications 11i is the new protocol used by the newer LoadRunner versions to script the E-Business Suite. The advantage is that you can use a single protocol to script the application instead of using the multi-protocol approach. This can be useful if your license is limited or you don’t want to spend more money on protocols you don’t already have. Another advantage is that this protocol can also be used to script the newer version of the E-Business Suite, release 12.

Tip 1: Correlating the Login

After recording your basic script, the first thing you will have to correlate on the login is the ICX ticket. In order to do this, you will have to add this command right before clicking on the functionality link:

web_reg_save_param("ICX",
"LB=var xgv15         \= \"",
"RB=\"\n",
"Search=Body",
LAST);

E.g.: if you are accessing the “Submit Processes and Reports” functionality, you will have to add this code right before the “web_text_link(“Submit Processes and Reports”…” line. This parameter will be used later on the “nca_connect_server” and “frmservlet” requests. On both requests, look for the “icx_ticket=” part of the URL and replace the recorded value with your parameter ({ICX}).

You will also need to correlate another session parameter that sometimes is not correlated automatically, and sometimes is wrongly correlated, when you’re recording the script. Look for the “web_url(“frmservlet”…” line mentioned above and add the following block BEFORE it:

web_reg_save_param("Jsession",
"LB=/forms/lservlet;jsessionid\=",
"RB=",
"Search=Noresource",
LAST);

This will create the “Jsession” parameter, which will be used on another call before the “nca_connect_server” call. Look for a line that starts with “web_url(“lservlet;jsessionid=” and, in its URL parameter, replace the large string that comes after “/lservlet;jsessionid=” with your parameter. Remember, do not replace the large session string in the title (first parameter), only in the URL parameter. E.g.:

web_url("lservlet;jsessionid=5ce9bad22e40aa7f7f9c4cff2549fc09a968476354cb39d14a0bd8636339170b.e38NaN0ObNiKai0LbNiSchaRaNaMe0",
"URL=http://<you_application_server>/forms/lservlet;jsessionid=<strong>{Jsession}</strong>?ifcmd=getinfo&ifhost=XXX&ifip=10.10.10.10",

To summarize, you will have to do five things:

  1. Capture the ICX parameter.
  2. Replace the ICX parameter on the frmservlet call.
  3. Replace the ICX parameter on the nca_connect_server request.
  4. Capture the Jsession parameter.
  5. Replace the Jsession parameter on the lservlet;jsessionid= call.

This should also work if you’re scripting against an Oracle 11 application with the new LoadRunner protocol. And remember, this has worked for me, but may not work for you depending on how your application was deployed and configured.

Tip 2: Recording Sessions with HTTPS (SSL)

Your application may be configured to use SSL (HTTPS connections). If this is your case, you probably won’t be able to record your scripts directly, since LoadRunner is not able to identify which server you’re connecting to. To solve this issue, you will have to manually add your application server address and port before recording. Click on the “Options” button in the lower-left corner of the recording window (before you start recording). Select “Network > Port Mapping” in the left side pane and click on the “New Entry” button. This screen should be presented:

[Screenshot: LoadRunner Oracle SSL Server Entry]

Target Server and Port are your application server’s address and port, respectively. Service can be left as “Auto Detect” and Record Type as “Proxy”. Set Connection Type to “Auto”.

This will enable the SSL Versions and SSL Ciphers fields. For these fields, you will have to check with your application administrator or spend some time “guessing” the correct configuration.

Click on Update and start recording the script. If everything was set correctly, LoadRunner should be recording your actions inside the application as usual.

How to evaluate the response time of a Citrix application

This came to my attention in the last few weeks, since several projects came in with this request and nobody had done it before.
First of all, I’d like to explain a little bit about how the Citrix protocol works on LoadRunner.
Once LoadRunner establishes a connection with the Citrix server, we have two kinds of transactions, Outbound and Inbound. The first comprises keystrokes and mouse gestures going from the client to the Citrix server; the second comprises window commands and screen refreshes going from the Citrix server to the client.
Differently from other protocols, where we send a request (an HTTP POST, a Web Service call, etc.) and wait for the response to measure the response time, the Citrix protocol does not have this kind of synchronization. Basically what happens is: we send a keystroke, for example, the server responds with an acknowledgment, and that’s it; there is no way of knowing if the application has processed the request and presented the results. In other words, there is no synchronization between Inbound and Outbound transactions.

In order to measure the response times for Citrix transactions, we need to find a way to “understand” what is coming from the Citrix server and synchronize on it manually. There are two ways of doing that: the first is to monitor for window changes, like a window opening; the second is to monitor a part of the screen for a refresh. The problem with both of them is predicting what is going to change, like the name of the window that is going to open or which part of the screen will refresh. This makes the scripting part really difficult and time consuming, because we cannot predict exactly what will change. For example, if we are monitoring for a screen change and an error message appears instead, the script will interpret it as the expected change and continue, consequently crashing.
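
For reference, LoadRunner’s Citrix protocol exposes both approaches through its synchronization functions. The snippet below is only a rough sketch: the window title, coordinates, and bitmap hash are placeholders, and the real arguments are generated by VuGen when you record.

// Rough sketch with placeholder arguments; the recorded script generates the real ones.

// Approach 1: wait for a window event, e.g. a window titled "Order Entry" becoming active.
ctrx_sync_on_window("Order Entry", ACTIVATE, 0, 0, 800, 600, "snapshot1", CTRX_LAST);

// Approach 2: wait for a screen region to match a previously captured bitmap hash.
ctrx_sync_on_bitmap(120, 240, 90, 20, "a1b2c3d4e5f60718293a4b5c6d7e8f90", CTRX_LAST);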

Another point to notice is that Citrix is a proprietary solution built on its own remoting protocol (ICA, conceptually similar to RDP). The protocol’s performance depends essentially on the available bandwidth, and since it is proprietary we cannot change it; consequently, we cannot increase the performance of the protocol itself, we can only work with the bandwidth between the Citrix server and the client. This bandwidth varies considerably over time (peak hours, etc.), so the results will vary a lot depending on when we run the tests.
Based on this, in order to have reliable results, we need to run several rounds of tests at different times, so we can get an idea of the overall performance.

Given these problems, I tried to find different ways of getting response times from Citrix applications. Based on a few articles on the subject, I found that there are more reliable ways of doing that than using LoadRunner scripts to capture response times.
Going back to the requirements, we can separate the “response time” of a certain action into three steps. The first is the time it takes to send a command from the client to the Citrix server. The second is the time the application takes to process that request. The third is the time it takes for Citrix to send the screen refresh with the results back to the client. So:

Send Keystroke + Time to Process + Send Screen Refresh = Overall Response Time to Perform an Action

In most cases it is easy and reliable to measure the “time to process” of each action locally, without Citrix, so what we really need to measure is the time it takes to send a keystroke to Citrix and the time it takes for the client to receive the screen refresh. Basically, it is the delay of sending a keystroke and the delay of receiving the screen.

We know that these delays are affected mainly by three factors: the bandwidth limitations, the latency, and the amount of data transferred. The first step is to find out how much data is transferred. We can do that in two ways: one is to use a sniffer on the network to capture exactly how much data is sent and received on each transaction; the second is to estimate this amount based on known factors.
For the outbound transactions this is quite easy, since mouse gestures and keystrokes usually consume a fairly constant amount of bandwidth. For the inbound transactions, we know that the remoting protocol uses JPG compression to send the screen refreshes, and it only sends updates for the parts of the screen that changed. This way we can estimate the amount of data transferred with a simple formula for the JPG size, using the image size, density, and color content of the refreshed areas.
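
As a minimal sketch of that estimate (the 24-bit color depth and the 20:1 compression ratio below are assumptions that should be calibrated against real captures of your application’s screens):

#include <stdio.h>

/* Hypothetical estimate of the inbound payload for one screen refresh.
 * Assumes the refreshed region is sent as a JPG-compressed image; the
 * compression ratio is a guess to be calibrated with real captures. */
double estimate_refresh_bytes(int width_px, int height_px,
                              double bits_per_pixel, double compression_ratio)
{
    double raw_bytes = (width_px * height_px * bits_per_pixel) / 8.0;
    return raw_bytes / compression_ratio;
}

int main(void)
{
    /* Example: a 400x300 refreshed region, 24-bit color, ~20:1 compression. */
    printf("Estimated refresh payload: %.0f bytes\n",
           estimate_refresh_bytes(400, 300, 24.0, 20.0));
    return 0;
}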

Knowing the amount of data transferred on inbound and outbound transactions, we have two options for determining the delay. The first is to measure the Round Trip Time (RTT) from the Citrix server to the client and from the client to the Citrix server, using the amount of data estimated before. The second is to estimate the RTT with known formulas that use these three factors to estimate the probable delay.
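
As a sketch of the second option, a simple first-order formula is delay ≈ latency + payload size / bandwidth. The bandwidth, latency, and payload sizes below are assumptions, not measured values:

#include <stdio.h>

/* Hypothetical first-order estimate of a one-way transfer delay:
 * network latency plus the serialization time of the payload.
 * All inputs are assumptions to be replaced with measured values. */
double estimate_delay_seconds(double payload_bytes,
                              double bandwidth_bps, double latency_s)
{
    return latency_s + (payload_bytes * 8.0) / bandwidth_bps;
}

int main(void)
{
    /* Example: an 18 KB screen refresh and a 100-byte keystroke packet
     * over a 2 Mbps link with 80 ms of latency. */
    printf("Estimated inbound delay:  %.3f s\n",
           estimate_delay_seconds(18000.0, 2000000.0, 0.080));
    printf("Estimated outbound delay: %.3f s\n",
           estimate_delay_seconds(100.0, 2000000.0, 0.080));
    return 0;
}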

After all these technical explanations, I believe the best solution is to:

  1. Measure the time to process each action locally, without Citrix.
  2. Estimate the average amount of data transferred from the client to the Citrix server, using the standard values for keystrokes and mouse gestures.
  3. Estimate the average amount of data transferred from the Citrix server to the client on the screen refreshes, using captured JPG images of the updated screens.
  4. Measure the RTT with the estimated data for the outbound transactions at different times. This can be easily done by a network engineer once we know the amount of data.
  5. Measure the RTT with the estimated data for the inbound transactions at different times.
  6. Sum all these measures to define our average response time to execute an action over Citrix.

This way we have more reliable, fine-grained results, making it easier to identify problems, since we will know whether the response times are high in the processing part or in the Citrix part.

Note that we’re not trying to measure the Citrix server performance from a load perspective. There are known metrics for Citrix servers that usually satisfy the sizing requirements.