VBScript Timer() function precision

This article explores the numeric precision of the ASP and VBScript Timer() function, showing that it may be more precise than it first appears.

 

Timer()

Prior to the arrival of PowerShell, the VBScript Timer() function was the closest thing that script writers had for undertaking high-precision timings in the Windows Script Host (WSH) environment.

The Timer() function returns a VBSingle – aka a Single Precision floating point number or a “Real” – value as a representation of the system’s real time clock. As a Single precision value, the permitted range is -3.402823E38 to -1.401298E-45 for negative values and 1.401298E-45 to 3.402823E38 for positive values. If you print out the value of Timer() in ASP/VBScript

WScript.Echo Timer()

You will get a value like

52191.62

This value is a representation of the number of seconds that have elapsed on the local executing system since the local system clock last hit midnight. Consequently, evaluating Timer() against a DateDiff() calculation of the number of seconds since midnight will produce the same answer (save for the fractional part).

WScript.Echo Timer()
WScript.Echo DateDiff("s", Date(), Now())

Which results in

52191.62
52191

 

The hidden detail

Masked by the default two-decimal-place printout, Timer() is significantly more detailed than just two decimal points. Depending on the precision of your system’s Real Time Clock (RTC), the precision may be up to 7 decimal places. You can view your system’s capability by subtracting the integer part from the timer value:

WScript.Echo Timer() - Int(Timer())

Which may result in a value such as

0.6171875

At 7 decimal places, the precision of Timer() is – floating point inaccuracy aside – considerably better than that of VBDateTime. Under VBDateTime, the second is the atomic value, offering no more precision.

0         - Second
0.6       - Decisecond  / 1 tenth
0.61      - Centisecond / 1 hundredth
0.617     - Millisecond / 1 thousandth
0.6171    -               1 ten thousandth
0.61718   -               1 hundred thousandth
0.617187  - Microsecond / 1 millionth
0.6171875 -               1 ten millionth

This demonstrates that there is flexibility in VBScript for more precise clock operations. But is the resolution high enough?
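Before getting to that, here is a minimal stopwatch sketch (my own illustration, not from the original article). Because Timer() counts seconds since midnight, it guards against the value resetting at the midnight rollover:

' Minimal stopwatch sketch using Timer()
Dim startTime, elapsed
startTime = Timer()

WScript.Sleep 250 ' stand-in for the work being measured

elapsed = Timer() - startTime
If elapsed < 0 Then elapsed = elapsed + 86400 ' Timer() reset at midnight
WScript.Echo "Elapsed: " & elapsed & " seconds"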

 

Fidelity

The resolution and fidelity of the Timer() function are what make it valuable (or not) to a programmer. Even on a modern system with a High Precision Event Timer (HPET), the update interrupt only fires a limited number of times per second, and the clock value only advances when it does.

Running an imprecise test as follows:

For i = 0 To 999
  WScript.Echo Timer() - Int(Timer())
Next

The output value only changed every 10 to 31 iterations (reflecting the CPU scheduler performing other tasks during execution). The counter increment itself was consistent: it updated 48 times, with an increment of between 0.0117187 and 0.0195313 seconds – once every 20.8 iterations on average.

0.0195313
0.0195313
0.0195313
0.0195313
0.0195313
0.0195313
0.0195313
0.0195313
0.0195313
0.0195313
0.0195313
0.0195313
0.0195313
0.0195313
0.0195313
0.015625
0.015625
0.015625
0.015625
0.015625
0.015625
0.015625
0.015625
0.015625
0.015625
0.015625
0.015625
0.015625
0.015625
0.015625
0.015625
0.015625
0.0117187
0.0117187
0.0117187
0.0117187
0.0117187
0.0117187
0.0117187
0.0117187
0.0117187
0.0117187
0.0117187
0.0117187
0.0117187
0.0117187
0.0117187
0.0117187

In practice this means that the viable, comparable resolution of Timer() is not much better than one update every 0.02 seconds. Over a longer time period, Timer() can offer higher accuracy, provided you aren’t polling for an update more often than once every 0.02 seconds.
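If you want to estimate the update increment on your own system, a rough sketch along the lines of the test above (the loop structure is my own) is to watch Timer() until it changes and average the deltas:

' Estimate the clock update increment by sampling Timer() in a tight loop
Dim lastValue, currentValue, ticks, total
lastValue = Timer()
ticks = 0
total = 0
Do While ticks < 10
  currentValue = Timer()
  If currentValue <> lastValue Then
    total = total + (currentValue - lastValue)
    ticks = ticks + 1
    lastValue = currentValue
  End If
Loop
WScript.Echo "Average increment: " & (total / ticks) & " seconds"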

SQL Server combining LIKE with IN (SELECT LIKE IN query)

System Requirements

  • SQL Server

 

The Problem

This was one of those “it should be obvious” moments.

Dating back to 2004 in its current iteration, the hpcfactor.com membership database has been blighted by SPAM. We are no different to any web community in this regard. It can be a particular problem, as many accounts that we miss are registered and (if they survive) become a valuable commodity to sell as trojan accounts for use in mischief at a later date.

The moderation team at HPC:Factor does a great job of ensuring that these ‘pay per post’ urchins (usually operating from Indian IPs) are wasting their time, but it does not stop them from trying. At one point we were having to contend with several hundred a day, and had to shut down registration completely. A custom layer that I wrote to sit on top of the BBS registration system has done a good job of converting that SPAM torrent into a trickle of a “couple of times a month”.

Over the years, I have added more and more banned email domains to a ban list that is maintained in the user registration database. It occurred to me that, as that list has grown, the historic pool of user accounts has never been re-parsed against the updated banned domains list.

Sounds simple, right? Fire up SQL Server Management Studio and create a SQL SELECT LIKE IN query… is it that simple?

 

The Solution

Yes, actually, it is.

The obvious logic that you want to follow is to take this syntax

SELECT id, username, emailaddress
FROM members
WHERE emailaddress IN (
   SELECT emailaddress FROM bannedemails
)

… and merge it with a LIKE statement to create a SELECT LIKE IN query, something like

SELECT id, username, emailaddress
FROM members
WHERE emailaddress LIKE IN (
   SELECT '%' + emailaddress FROM bannedemails
)

Unfortunately, however, this syntax is completely illegal.

The answer is to perform the LIKE/IN as part of an INNER JOIN

SELECT members.id, members.username, members.emailaddress
FROM members
INNER JOIN bannedemails ON members.emailaddress LIKE '%' + bannedemails.emailaddress
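To make the matching behaviour concrete, here is a minimal, disposable sketch (the sample rows are hypothetical, and it assumes that bannedemails stores entries in the form ‘@domain.tld’):

-- Hypothetical sample data to demonstrate the suffix match
CREATE TABLE #members (id INT, username VARCHAR(50), emailaddress VARCHAR(255));
CREATE TABLE #bannedemails (emailaddress VARCHAR(255));

INSERT INTO #members VALUES (1, 'alice', 'alice@example.com'),
                            (2, 'mallory', 'mallory@spam-domain.com');
INSERT INTO #bannedemails VALUES ('@spam-domain.com');

-- Returns only mallory: '%' + '@spam-domain.com' matches any address ending
SELECT m.id, m.username, m.emailaddress
FROM #members AS m
INNER JOIN #bannedemails AS b
        ON m.emailaddress LIKE '%' + b.emailaddress;

DROP TABLE #members, #bannedemails;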

 

Performance

This is not a “real time” friendly SQL expression. The inefficiency of LIKE evaluation becomes very apparent as the record set scales. The HPC:Factor database had 2,271 banned email domains and 30,336 users, giving a total iteration and evaluation count of 68,893,056 row comparisons.

The total query execution time for this one was 2 minutes 13 seconds (ouch). Having said that, on reviewing why it took that long, I realised that at some point I had been extremely stingy with the SQL Server VM: it only had access to 2 CPU cores and 12 GB of RAM. The hypervisor server itself has access to a 10-core hyper-threaded Xeon.
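As an aside, if every entry in bannedemails is a whole domain beginning with ‘@’ (my assumption here, not something the original query requires), you can avoid the LIKE scan entirely by extracting the domain portion of each address and joining on equality, which an index can support:

-- Sketch: equality join on the extracted '@domain.tld' portion
-- (assumes every members.emailaddress contains an '@')
SELECT m.id, m.username, m.emailaddress
FROM members AS m
INNER JOIN bannedemails AS b
        ON b.emailaddress = SUBSTRING(m.emailaddress,
                                      CHARINDEX('@', m.emailaddress),
                                      255);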

 

The Result

I have a lot of respect and sympathy for people who use VPNs and temporary email addresses and I genuinely understand why they feel the need (or in some cases must, depending on what part of the world you are in).

In saying that, I also have to deal with the issue from the other side of the equation. If the website is having a bad day, it’s usually because of something coming from one or both of these camps. As ever in life, a minority will insist on spoiling a good thing for everyone else.

After running the query, we had only missed 8 user accounts – credit to the moderation team. The oldest account dated back to 26/09/2006, and my favourite that we probably should not have missed was “mrputin” from January 2013.

PowerShell – Convert DER Encoded Certificate file into a Base64 .cer

System Requirements:

  • Windows PowerShell

The Problem:

If you have a binary (DER) encoded .cer (certificate) file that you need to get into a Base64 format, you can either follow the common advice and use OpenSSL to convert it, or you can import it into the Windows Certificate Store and re-export it.

If you want to do it automatically, without having to download and install anything else, neither option is particularly appealing.

The Fix

You can use the following function to convert the binary encoded file into a Base64 ASCII encoded file

function Convert-CertificateBinaryToBase64 {
    param(
        [string]$SourceFile,
        [string]$DestinationFile
    )

    # Read the DER (binary) encoded certificate as raw bytes
    # (-Encoding Byte is Windows PowerShell syntax)
    $cert = Get-Content "$SourceFile" -Encoding Byte

    # Wrap the Base64 payload in the standard PEM certificate armour
    $content = @(
        '-----BEGIN CERTIFICATE-----'
        [System.Convert]::ToBase64String($cert, 'InsertLineBreaks')
        '-----END CERTIFICATE-----'
    )

    $content | Out-File -FilePath "$DestinationFile" -Encoding ASCII
}

Example usage

Convert the file, retaining the source file

Convert-CertificateBinaryToBase64 -Sourcefile 'C:\myBinary.cer' -DestinationFile 'C:\myBase64.cer'

Convert the binary file, overwriting it with the Base64 file

Convert-CertificateBinaryToBase64 -Sourcefile 'C:\myCertificate.cer' -DestinationFile 'C:\myCertificate.cer'
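As a quick sanity check (my addition, not part of the original function), both the binary original and the Base64 output should load via the .NET X509Certificate2 class and report the same thumbprint:

$before = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 'C:\myBinary.cer'
$after  = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 'C:\myBase64.cer'
$before.Thumbprint -eq $after.Thumbprint   # True if the conversion is intact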

Emulating ServerXmlHttpRequest with XmlHttpRequest in JavaScript

System Requirements:

  • A web browser
  • JavaScript

The Problem:

So you want to use ServerXmlHttpRequest in JavaScript? Well, let me spoil your afternoon for you: you can’t. However, there are a few things that – if you are desperate enough – might help you overcome the problem.

This document offers a high level overview of my thinking on the problem.

More Info

XmlHttp is an extremely powerful tool. As a coding apparatus, it brought about Asynchronous JavaScript And XML (AJAX), which has redefined the web user interface paradigm. Yet Microsoft also created something else, something very similar but a little bit different: the ServerXmlHttp object.
There are several key differences between the two:

                                            XmlHttp                                 ServerXmlHttp
Intended use                                Client -> Server                        Server -> Server
                                                                                    Server -> Client
Concurrent connection limit                 Depends upon the browser:               None (RAM and socket availability
(per-domain, session*)                      2 in legacy IE, raised to               are practical limits however);
                                            6 in IE 9 or higher?++                  5460 in MS XML 3.0
Concurrent connection limit (session*)      17++                                    None (RAM and socket availability
                                                                                    are practical limits however);
                                                                                    5460 in MS XML 3.0
Works in JavaScript                         Yes                                     No
Works in JScript                            Yes                                     Yes
Works in VBScript                           Yes                                     Yes (ASP / IE)
Works in COM                                Yes                                     Yes
Supports Sessions                           Yes                                     No
Supports Cookies                            Yes                                     No
Supports HTTP proxy discovery               Yes                                     No (use proxycfg.exe)
Supports Authentication                     Yes                                     No
URL Caching                                 Yes                                     No
HTTP/1.1 Chunking                           Yes                                     No
Works in 'Offline' mode                     Yes                                     No
Supports Gopher                             Yes                                     No
Supports FTP                                Yes                                     No
Provider                                    URLMon/WinInet                          WinHTTP
Availability                                MS XML 2.0 or higher                    MS XML 3.0 or higher
                                            MS IE 5.0 or higher                     MS IE 5.01 or higher

* This number reflects the entire session. If you are loading in a web browser, this includes the page, each image loading concurrently on the page, and all scripts being pulled in. Once they have all loaded (or timed out), XmlHttp has access to the full number, but not before.
++ You can use browserscope to assess the connection limits for different browser versions
http://www.browserscope.org/?category=network&v=top

Why would you ever need to do this?

The reason is specifically related to performing tasks that take a very long time to complete without encountering a blocking I/O state on the web browser. In 2017, that means JavaScript; and simply put, you cannot use ServerXmlHttpRequest in JavaScript and keep your web application browser-independent.

If you need to ask a server to calculate Pi to 1,000,000 digits, and then for scientific reasons need it to verify that 9 more times, get the results and compare the answers – and you want to do it asynchronously – you cannot do this with XmlHttp (especially if you don’t want your web browser to appear to crash to the end user).

If you want to have the browser appear to still be responsive while it waits for the 10 answers to come back, you need to find a different solution.

So what can you do if you require more connections than the client allows?

You are fairly limited here in what you can do, as this is an intended design of XmlHttp and not a limitation – a good one too, as otherwise there would be a considerable security impact (DDoS) as well as performance implications for things such as battery life, CPU usage caps and the negative effects of JavaScript on the browsing public.

Make it IE only

Hard code your solution so that it only works in IE and make any necessary client changes to the IE configuration to make it work correctly.

Obviously, this isn’t a serious solution.

Modify the registry/browser configuration to increase the concurrent limit count

Again, obviously, this isn’t a serious solution.

Use multiple domains/sub-domains

Reduce the number of elements loading on the page from the same source (having a separate domain/sub-domain for images and include files is often stated as an aid here).

Use additional server side worker processes

After some trial and error, I realised that the connection limits are, in practice, imposed against the HTTP connection session on a per-port basis. If you create multiple IIS worker processes to receive and process requests from the same client, you can considerably increase the number of connections made to the same domain from a single client. But it does consume more server resources, and it does require some “thread pool” (I use the term lightly) management logic be added to the client.

How do you do that? Bindings. Each additional socket and binding that you create (using the same domain name) adds another full per-domain connection allowance to the pool. For example

Let’s connect to the non-existent web address http://www.myajaxdomain.com/ajax-api/calculatePi/1000000

That is actually shorthand for http://www.myajaxdomain.com:80/ajax-api/calculatePi/1000000.

If in IIS you add additional port bindings for the EXACT SAME website and domain name on ports 81, 82, 83, 84 and 85, you will have added significantly more concurrent connections to the potential pool.
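For reference, one way to script the extra bindings is with appcmd (the site name here is hypothetical); repeat for ports 82 through 85:

%windir%\system32\inetsrv\appcmd.exe set site /site.name:"MyAjaxSite" /+bindings.[protocol='http',bindingInformation='*:81:www.myajaxdomain.com']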

In this example, you can treat the port 80 instance as reserved for the client: if a user clicks away in the browser, the page will respond immediately, because all other traffic is dead-locked on 81, 82, 83, 84 or 85.

With the socket connections to 81-85, you create a pseudo-thread pool and allocate workers in the client to each of the pool entries. You then need to register the correct callback function in JavaScript to allocate and deallocate each “thread” on the client.

Your web browser can now wait for the 10 results to come back, sending the next request as a new “thread” becomes available on the server.

Example “thread pool” call to /ajax-api/calculatePi/1000000

http://www.myajaxdomain.com:80   Reserved
http://www.myajaxdomain.com:81   Available = False   XmlHttpStatus = 3      Callback = myCallbackFunction()
http://www.myajaxdomain.com:82   Available = False   XmlHttpStatus = 1      Callback = myCallbackFunction()
http://www.myajaxdomain.com:83   Available = True    XmlHttpStatus = null   Callback = null
http://www.myajaxdomain.com:84   Available = True    XmlHttpStatus = null   Callback = null
http://www.myajaxdomain.com:85   Available = True    XmlHttpStatus = null   Callback = null

You will need to implement CORS headers in order to use this workaround successfully; however, it does work, although you will NOT be able to share Session information between each socket connection (unless you implement session awareness / sharing in your application layer).
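As a minimal sketch of the client-side management logic (the pool and dispatch helper names are mine; the domain, ports and calculatePi endpoint are the illustrative examples above):

// Sketch: a client-side "thread pool" across the extra port bindings.
var pool = [81, 82, 83, 84, 85].map(function (port) {
  return { base: 'http://www.myajaxdomain.com:' + port, busy: false };
});

function dispatch(path, onDone) {
  for (var i = 0; i < pool.length; i++) {
    if (!pool[i].busy) {
      var slot = pool[i];
      slot.busy = true;                      // allocate the "thread"
      var xhr = new XMLHttpRequest();
      xhr.open('GET', slot.base + path, true);
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {          // complete (or failed)
          slot.busy = false;                 // deallocate the "thread"
          onDone(xhr.status, xhr.responseText);
        }
      };
      xhr.send();
      return true;                           // dispatched
    }
  }
  return false;                              // pool exhausted; retry later
}

// Usage: issue the 10 Pi requests, waiting whenever the pool is full
var remaining = 10;
var timer = setInterval(function () {
  if (remaining === 0) { clearInterval(timer); return; }
  if (dispatch('/ajax-api/calculatePi/1000000', function (status, body) {
        console.log('Result received with HTTP status ' + status);
      })) {
    remaining--;
  }
}, 100);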

I am going to compare this idea (which I have used in anger, and which saved me after much thought and days of time trying to find a solution) to that of creating a thread pool. It is in effect a logical creation of a client-side thread pool. It requires a lot of work in the client code to create and manage the “threads”, and it needs a lot of checking to ensure that you do not wind up dead-locking your browser session. Consequently, this approach is not for the faint hearted.

That said, it keeps the entire code base client side and relieves pressure on state awareness and the server-side session management subsystem. It also reduces the risk to your IIS application because you are (or can be) using additional IIS worker processes in isolation. You can even create multiple IIS websites whose bindings are completely isolated from each other yet share the code base.

The disadvantage is that clients will need to have firewall access to each of the new socket addresses that you create, which can be a problem for a publicly accessible web application.

Implement a Server Side Queue

The final way that you can overcome the limitation is to create a traceable queue and perform all of the operations in the server session process. This basically moves the creation of the “thread pool” from the client onto the server, allowing the server to process the logic using ServerXmlHttpRequest, bypassing any limits.

This would look something akin to the following:

  1. Client contacts web services and requests the function for calculating Pi to 1000000 places 10 times: http://www.myajaxdomain.com/ajax-api/calculatePi/1000000/10.
  2. Server creates a worker pool with 10 slots in the session, generates a unique transaction ID (abz678), starts them running (this is where the black magic needs to occur in your solution) and sends the transaction ID back to the client
  3. The client disconnects
  4. Every n seconds, the client contacts the server, this time sending with it the transaction ID and the entry that it wants the result for: http://www.myajaxdomain.com/ajax-api/getPiCalculationResult/abz678/5. The server looks up the result and returns the status to the client, e.g. (a minimal client-side polling sketch follows this list):
    {
      Transaction: 'abz678',
      Index: 5,
      Status: 'Complete',
      Result: '3.14159265359…………'
    }

    or

    {
      Transaction: 'abz678',
      Index: 5,
      Status: 'Processing',
      Result: null
    }

    or

    {
      Transaction: 'abz678',
      Index: 5,
      Status: 'Failed',
      Result: 'Error -123456 occurred'
    }

  5. Once the client has received a Complete/Failed return for each of the 10 iterations, it will terminate activity, and it will be up to you and the server how it cleans up.
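A minimal sketch of the client-side polling loop from step 4 (the pollResult helper name and the 2-second retry interval are my own assumptions; the endpoint and transaction ID are the examples above):

// Sketch: poll the server for each slot's result until it settles.
function pollResult(transactionId, index, onSettled) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'http://www.myajaxdomain.com/ajax-api/getPiCalculationResult/'
                  + transactionId + '/' + index, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState !== 4) return;
    var reply = JSON.parse(xhr.responseText); // assumes well-formed JSON replies
    if (reply.Status === 'Complete' || reply.Status === 'Failed') {
      onSettled(reply);                       // settled: stop polling this slot
    } else {
      setTimeout(function () {                // still Processing: ask again later
        pollResult(transactionId, index, onSettled);
      }, 2000);
    }
  };
  xhr.send();
}

// Usage: poll all 10 slots of transaction 'abz678'
for (var i = 0; i < 10; i++) {
  pollResult('abz678', i, function (reply) {
    console.log('Slot ' + reply.Index + ': ' + reply.Status);
  });
}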

This is the most complicated method to implement. You still have all of the “pool” headaches to contend with, just in a different place. Additionally, you have to persist processing state in your web application, and you need to ensure that the server is processing everything wholly asynchronously – something that may actually require additional code to be present on the web server to provide CRON/scheduler service support, or a fully blown queue pick-up service to be available.

You also need to convert your system into what is effectively a message passing system which isn’t an easy task to perform if you are maintaining or converting a system.

The advantage of this approach is that it is the best way to undertake the task. The disadvantage however is that you can quickly get into an infinite spiral of abstraction that ultimately pushes the point where the work is performed in your programme to increasingly lower levels.

At that point, you’ve probably forgotten you were trying to calculate Pi in the first place.