Unable to update NuGet or packages in PowerShell due to “WARNING: Unable to download the list of available providers. Check your internet connection.”

When attempting to install or update PowerShell modules, NuGet itself or NuGet packages in PowerShell 5, you receive one or more of the following errors:

WARNING: Unable to resolve package source 'https://www.powershellgallery.com/api/v2/'.

The underlying connection was closed: An unexpected error occurred on a receive.

WARNING: Unable to download the list of available providers. Check your internet connection.

Equally, you may receive the same error when attempting to run a wget (an alias for Invoke-WebRequest in PowerShell) command, e.g.

wget https://www.google.com/

You are unable to install/update the software component or make an outbound internet connection.

This issue may be especially prevalent on IIS installations serving HTTPS websites.

The Fix

Conventional troubleshooting is fairly well documented online:

  1. Ensure that you are actually able to open an HTTPS web page in a web browser.
  2. Ensure that your DNS is working correctly.
  3. Check to see whether wget can connect to a non-HTTPS site, e.g.
    wget http://www.google.com/
  4. Check to see whether or not you need to use a proxy server. If so, you must configure PowerShell to use your proxy server before you proceed. This may require you to configure PowerShell with your proxy server credentials.
    $webclient=New-Object System.Net.WebClient
    $webclient.Proxy.Credentials = [System.Net.CredentialCache]::DefaultNetworkCredentials
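
If you need the proxy settings to apply to your whole PowerShell session rather than to a single WebClient object, a sketch using the same .NET defaults is:

# Session-wide alternative: route all .NET web requests through the system
# proxy, using the signed-in user's credentials
[System.Net.WebRequest]::DefaultWebProxy = [System.Net.WebRequest]::GetSystemWebProxy()
[System.Net.WebRequest]::DefaultWebProxy.Credentials = [System.Net.CredentialCache]::DefaultNetworkCredentials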

A less obvious issue to explore relates to the default operating system security configuration for SSL/TLS.

More Info

By default, Windows Server and Windows client allow SSL 3.0, TLS 1.0, TLS 1.1 and TLS 1.2. The .NET Framework is also configured to allow these protocols and, by default, any outbound request to an SSL site will attempt to use SSL 3.0/TLS 1.0 as its starting protocol.
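
You can check what a given session will offer from the PowerShell prompt; on older .NET Framework builds this typically reports Ssl3, Tls:

# Show the protocols this session will use for outbound SSL/TLS requests
[System.Net.ServicePointManager]::SecurityProtocol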

In secure environments, where system administrators have applied recommended best practice and disabled the use of SSL 2.0, SSL 3.0 and TLS 1.0 on Windows systems, PowerShell is not currently clever enough to compare its own configuration to that of the operating system. Consequently, when attempting to make an outbound HTTPS request in such an environment, PowerShell will attempt to use one of the older protocols that the operating system’s networking stack has disabled. Instead of re-attempting the request with a higher protocol version, PowerShell will fail the request with one of the error messages listed at the beginning of the article.

As NuGet and Update-Module both attempt to make connections to Microsoft servers using HTTPS, they too will fail.

Encountering this issue on an SSL-enabled IIS installation is more common, as it is more likely that system administrators will have applied best practice and disabled legacy encryption protocols on these servers; their public-facing, high-visibility role demands such a response.

To fix the issue there are two options:

  1. Reconfigure and reboot the system to re-enable client use of TLS 1.0 (and possibly SSL 3.0) via the registry (see the sketch after this list):
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\<protocol>\Client

    DisabledByDefault = 0
    Enabled = ffffffff (hex)

  2. Alternatively, set up each PowerShell environment so that the script itself knows not to use the legacy protocol versions. This is achieved via the following code, which restricts PowerShell to using only TLS 1.1 and TLS 1.2 (a verification snippet follows the list).
    [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]'Tls11,Tls12'
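
For option 1, a minimal PowerShell sketch of the registry change, assuming the standard SCHANNEL key layout (run elevated; the reboot is still required):

# Re-enable client-side TLS 1.0 (assumes the standard SCHANNEL key layout)
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name 'DisabledByDefault' -Value 0 -Type DWord
Set-ItemProperty -Path $key -Name 'Enabled' -Value 0xffffffff -Type DWord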
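
For option 2, you can confirm that the override took effect and then retry the original request:

# Confirm the session override, then retry an outbound HTTPS request
[System.Net.ServicePointManager]::SecurityProtocol
Invoke-WebRequest https://www.google.com/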

SQL Server combining LIKE with IN (SELECT LIKE IN query)

System Requirements

  • SQL Server

 

The Problem

This was one of those “it should be obvious” moments.

Dating back to 2004 in its current iteration, the hpcfactor.com membership database has been blighted by SPAM. We are no different to any web community in this regard. It can be a particular problem, as many accounts that we miss are registered and (if they survive) become a valuable commodity to sell on as trojan accounts for use in mischief at a later date.

The moderation team at HPC:Factor does a great job of ensuring that these ‘pay per post’ urchins (usually from Indian IPs) are wasting their time, but it does not stop them from trying. At one point we were having to contend with several hundred a day, and had to shut down registration completely. A custom layer that I wrote to sit on top of the BBS registration system has done a good job of converting that SPAM torrent into a trickle of a “couple of times a month”.

Over the years, I have added more and more banned email domains to a ban list that is maintained in the user registration database. It occurred to me that, as that list has grown, the historic pool of user accounts has never been re-parsed against the updated banned domains list.

Sounds simple, right? Fire up SQL Server Management Studio and create a SQL SELECT LIKE IN query… is it that simple?

 

The Solution

Yes actually it is.

The obvious logic that you want to follow is to take this syntax

SELECT id, username, emailaddress
FROM members
WHERE emailaddress IN (
   SELECT emailaddress FROM bannedemails
)

… and merge it with a LIKE statement to create a SELECT LIKE IN query, something like

SELECT id, username, emailaddress
FROM members
WHERE emailaddress LIKE IN (
   SELECT '%' + emailaddress FROM bannedemails
)

Unfortunately, however, this syntax is completely illegal.

The answer is to perform the LIKE/IN as part of an INNER JOIN:

SELECT members.id, members.username, members.emailaddress
FROM members
INNER JOIN bannedemails ON members.emailaddress LIKE '%' + bannedemails.emailaddress
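
One caveat: if an address can match more than one banned pattern, the INNER JOIN will return duplicate member rows. A duplicate-free equivalent, assuming the same table layout, is to move the LIKE into an EXISTS test:

SELECT id, username, emailaddress
FROM members
WHERE EXISTS (
   SELECT 1
   FROM bannedemails
   WHERE members.emailaddress LIKE '%' + bannedemails.emailaddress
)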

 

Performance

This is not a “real time” friendly SQL expression. The inefficiency of LIKE evaluation becomes very apparent as the record set scales. The HPC:Factor database had 2,271 banned email domains and 30,336 users, giving a total record iteration and evaluation count of 68,893,056 rows.

The total query execution time for this one was 2 minutes 13 seconds (ouch). Having said that, on reviewing why it took that long, I realised that at some point I had been extremely stingy with the SQL Server VM: it only had access to 2 CPU cores and 12 GB of RAM. The hypervisor server itself has access to a 10-core hyper-threaded Xeon.

 

The Result

I have a lot of respect and sympathy for people who use VPNs and temporary email addresses, and I genuinely understand why they feel the need (or, in some cases, the necessity, depending on which part of the world you are in).

In saying that, I also have to deal with the issue from the other side of the equation. If the website is having a bad day, it’s usually because of something coming from one or both of these camps. As ever in life, a minority will insist on spoiling a good thing for everyone else.

After running the query, we had missed only 8 user accounts – credit to the moderation team. The oldest account dated from 26/09/2006, and my favourite that we probably should not have missed was “mrputin” from January 2013.

PowerShell – Convert DER Encoded Certificate file into a Base64 .cer

System Requirements:

  • Windows PowerShell

The Problem:

If you have a binary (DER) encoded .cer (certificate) file that you need to get into a Base64 format, you can either follow the usual advice and use OpenSSL to convert it, or you can import it into the Windows Certificate Store and re-export it.

If you want to do it automatically, without having to download and install anything else, neither option is particularly appealing.

The Fix

You can use the following function to convert the binary encoded file into a Base64, ASCII encoded file:

function Convert-CertificateBinaryToBase64 {
    param(
        [string]$SourceFile,
        [string]$DestinationFile
    )
    # Read the DER (binary) encoded certificate as raw bytes
    $cert = Get-Content "$SourceFile" -Encoding Byte
    # Base64 encode the bytes and wrap them in the standard PEM certificate markers
    $content = @(
        '-----BEGIN CERTIFICATE-----'
        [System.Convert]::ToBase64String($cert, 'InsertLineBreaks')
        '-----END CERTIFICATE-----'
    )
    $content | Out-File -FilePath "$DestinationFile" -Encoding ASCII
}

Example usage

Convert the file, retaining the source file

Convert-CertificateBinaryToBase64 -Sourcefile 'C:\myBinary.cer' -DestinationFile 'C:\myBase64.cer'

Convert the binary file, overwriting it with the Base64 file

Convert-CertificateBinaryToBase64 -Sourcefile 'C:\myCertificate.cer' -DestinationFile 'C:\myCertificate.cer'

Emulating ServerXmlHttpRequest with XmlHttpRequest in JavaScript

System Requirements:

  • A web browser
  • JavaScript

The Problem:

So you want to use ServerXmlHttpRequest in JavaScript? Well, let me spoil your afternoon for you: you can’t. However, there are a few things that – if you are desperate enough – might help you overcome the problem.

This document offers a high level overview of my thinking on the problem.

More Info

XmlHttp is an extremely powerful tool. As a coding apparatus, it has brought about Asynchronous JavaScript And XML (AJAX), which has redefined the web user interface paradigm. Yet Microsoft also created something else, something very similar but a little bit different: the ServerXmlHttp object.
There are several key differences between the two:

  • Intended use – XmlHttp: Client -> Server; ServerXmlHttp: Server -> Server and Server -> Client
  • Concurrent Connection Limit (per-domain, session*) – XmlHttp: depends upon the browser (2 in legacy IE, raised to 6 in IE 9 or higher?++); ServerXmlHttp: none (RAM and socket availability are practical limits, however; 5460 in MS XML 3.0)
  • Concurrent Connection Limit (session*) – XmlHttp: 17++; ServerXmlHttp: none (RAM and socket availability are practical limits, however; 5460 in MS XML 3.0)
  • Works in JavaScript – XmlHttp: Yes; ServerXmlHttp: No
  • Works in JScript – XmlHttp: Yes; ServerXmlHttp: Yes
  • Works in VBScript – XmlHttp: Yes; ServerXmlHttp: Yes (ASP / IE)
  • Works in COM – XmlHttp: Yes; ServerXmlHttp: Yes
  • Supports Sessions – XmlHttp: Yes; ServerXmlHttp: No
  • Supports Cookies – XmlHttp: Yes; ServerXmlHttp: No
  • Supports HTTP Proxy discovery – XmlHttp: Yes; ServerXmlHttp: No (use proxycfg.exe)
  • Supports Authentication – XmlHttp: Yes; ServerXmlHttp: No
  • URL Caching – XmlHttp: Yes; ServerXmlHttp: No
  • HTTP/1.1 Chunking – XmlHttp: Yes; ServerXmlHttp: No
  • Works in ‘Offline’ mode – XmlHttp: Yes; ServerXmlHttp: No
  • Supports Gopher – XmlHttp: Yes; ServerXmlHttp: No
  • Supports FTP – XmlHttp: Yes; ServerXmlHttp: No
  • Provider – XmlHttp: URLMon/WinInet; ServerXmlHttp: WinHTTP
  • Availability – XmlHttp: MS XML 2.0 or higher, MS IE 5.0 or higher; ServerXmlHttp: MS XML 3.0 or higher, MS IE 5.01 or higher

* This number reflects the entire session. If you are loading in a web browser, this includes the page, each image loading concurrently on the page and all scripts being pulled in. Once they have all loaded (or timed out), XmlHttp has access to the full number, but not before.
++ You can use Browserscope to assess the connection limits for different browser versions:
http://www.browserscope.org/?category=network&v=top

Why would you ever need to do this?

The reason is specifically related to performing tasks that take a very long time to complete without encountering a blocking I/O state in the web browser. In 2017, that means JavaScript, and, simply put, you cannot use ServerXmlHttpRequest in JavaScript while keeping your web application browser independent.

Suppose you need to ask a server to calculate Pi to 1,000,000 digits and then, for scientific reasons, need it to verify that 9 more times, get the results and compare the answers – and you want to do it asynchronously. You cannot do this with XmlHttp (especially if you don’t want your web browser to appear to crash to the end user).

If you want to have the browser appear to still be responsive while it waits for the 10 answers to come back, you need to find a different solution.

So what can you do if you require more connections than the client allows?

You are fairly limited here in what you can do, as this is an intended design of XmlHttp and not a limitation – and it is a good one too; otherwise there would be a considerable security impact (DDoS) as well as performance implications for things such as battery life, CPU usage caps and the negative effects of JavaScript on the browsing public.

Make it IE only

Hard code your solution so that it only works in IE and make any necessary client changes to the IE configuration to make it work correctly.

Obviously, this isn’t a serious solution.

Modify the registry/browser configuration to increase the concurrent limit count

Again, obviously, this isn’t a serious solution.

Use multiple domains/sub-domains

Reduce the number of elements loading on the page from the same source (having a separate domain/sub-domain for images and include files is often stated as an aid here).

Use additional server-side worker processes

After some trial and error, I realised that the connection limits are, in practice, imposed against the HTTP connection session on a per-port basis. If you create multiple IIS worker processes to receive and process requests from the same client, you can considerably increase the number of connections made to the same domain from a single client. It does, however, consume more server resources, and it requires some “thread pool” (I use the term lightly) management logic to be added to the client.

How do you do that? Bindings. For each additional socket and binding that you create (using the same domain name), you add another full per-domain connection allocation to the pool. For example:

Let’s connect to the non-existent web address http://www.myajaxdomain.com/ajax-api/calculatePi/1000000

That is actually shorthand for http://www.myajaxdomain.com:80/ajax-api/calculatePi/1000000.

If in IIS you add additional port bindings for the EXACT SAME website and domain name on ports 81, 82, 83, 84 and 85, you will have added significantly more concurrent connections to the potential pool.
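
As a sketch, assuming the WebAdministration module and an IIS website named after the domain (both assumptions), the extra bindings could be scripted in PowerShell:

# Hypothetical example: add HTTP bindings for ports 81-85 to the same website
Import-Module WebAdministration
81..85 | ForEach-Object {
    New-WebBinding -Name 'www.myajaxdomain.com' -Protocol http -Port $_ -HostHeader 'www.myajaxdomain.com'
}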

In this example, you can treat the port 80 instance as reserved for the client; if a user clicks away in the browser, the page will respond immediately, because all other traffic is dead-locked on 81, 82, 83, 84 or 85.

With the socket connections to 81-85 you create a pseudo thread pool and allocate workers in the client to each of the pool entries. You then need to register the correct callback function in JavaScript to allocate and deallocate each “thread” on the client (a sketch follows the example below).

Your web browser can now wait for the 10 results to come back, sending the next request as a new “thread” becomes available on the server.

Example “thread pool” call to /ajax-api/calculatePi/1000000

http://www.myajaxdomain.com:80   Reserved
http://www.myajaxdomain.com:81   Available = False   XmlHttpStatus = 3      Callback = myCallbackFunction()
http://www.myajaxdomain.com:82   Available = False   XmlHttpStatus = 1      Callback = myCallbackFunction()
http://www.myajaxdomain.com:83   Available = True    XmlHttpStatus = null   Callback = null
http://www.myajaxdomain.com:84   Available = True    XmlHttpStatus = null   Callback = null
http://www.myajaxdomain.com:85   Available = True    XmlHttpStatus = null   Callback = null
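
A minimal sketch of that client-side pool management in JavaScript follows; the domain, paths and names are all illustrative, and real code would also need timeout and retry handling:

// Illustrative client-side "thread pool": one entry per extra port binding
var pool = [81, 82, 83, 84, 85].map(function (port) {
    return { port: port, available: true };
});

// Dispatch a GET to the first free "thread"; returns false if the pool is busy
function dispatch(path, onResult) {
    for (var i = 0; i < pool.length; i++) {
        if (!pool[i].available) { continue; }
        var slot = pool[i];
        slot.available = false;                              // allocate the "thread"
        var xhr = new XMLHttpRequest();
        xhr.open('GET', 'http://www.myajaxdomain.com:' + slot.port + path, true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4) {
                slot.available = true;                       // deallocate the "thread"
                onResult(xhr.status, xhr.responseText);
            }
        };
        xhr.send();
        return true;
    }
    return false;                                            // pool exhausted; caller must retry
}

A caller would then queue its 10 requests and call dispatch() again from each completion callback until all of them have been answered.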

You will need to implement CORS headers in order to use this workaround successfully. It does work, although you will NOT be able to share session information between each socket connection (unless you implement session awareness/sharing in your application layer).

I am going to compare this idea (which I have used in anger, and which saved me after much thought and days spent trying to find a solution) to creating a thread pool. It is, in effect, the logical creation of a client-side thread pool. It requires a lot of work in the client code to create and manage the “threads”, and it needs a lot of checking to ensure that you do not wind up dead-locking your browser session. Consequently, this approach is not for the faint-hearted.

That said, it keeps the entire code base client side and relieves pressure on the creation of state awareness and the server-side session management subsystem. It also reduces the risk to your IIS application because you are using (or can use) additional IIS worker processes in isolation. You can even create multiple IIS websites whose bindings are completely isolated from each other yet share the same code base.

The disadvantage is that clients will need to have firewall access to each of the new socket addresses that you create, which can be a problem for a publicly accessible web application.

Implement a Server Side Queue

The final way that you can overcome the limitation is to create a traceable queue and perform all of the operations in the server session process. This basically moves the creation of the “thread pool” from the client onto the server, allowing the server to process the logic using ServerXmlHttpRequest, bypassing any limits.

This would look something akin to the following:

  1. The client contacts the web service and requests the function for calculating Pi to 1,000,000 places 10 times: http://www.myajaxdomain.com/ajax-api/calculatePi/1000000/10
  2. The server creates a worker pool with 10 slots in the session, generates a unique transaction ID (abz678), starts them running (this is where the black magic needs to occur in your solution) and sends the transaction ID back to the client
  3. The client disconnects
  4. Every n seconds the client contacts the server, this time sending the transaction ID and the index of the entry that it wants the result for: http://www.myajaxdomain.com/ajax-api/getPiCalculationResult/abz678/5. The server looks up the result and returns the status to the client, e.g.

    {
        "Transaction": "abz678",
        "Index": 5,
        "Status": "Complete",
        "Result": "3.14159265359…………"
    }

    or

    {
        "Transaction": "abz678",
        "Index": 5,
        "Status": "Processing",
        "Result": null
    }

    or

    {
        "Transaction": "abz678",
        "Index": 5,
        "Status": "Failed",
        "Result": "Error -123456 occurred"
    }

  5. Once the client has received a Complete/Failed return for each of the 10 iterations, it terminates activity; it is up to you and the server how everything is cleaned up (see the polling sketch after this list)
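
To make step 4 concrete, here is a hedged JavaScript polling sketch; the URL and the JSON field names simply follow the hypothetical API above:

// Poll the server for one result index until it reports Complete or Failed
function pollForResult(transactionId, index, onDone) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'http://www.myajaxdomain.com/ajax-api/getPiCalculationResult/' +
        transactionId + '/' + index, true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState !== 4) { return; }
        var reply = JSON.parse(xhr.responseText);
        if (reply.Status === 'Complete' || reply.Status === 'Failed') {
            onDone(reply);                                   // final answer for this index
        } else {
            // Still 'Processing': ask again in n (here 5) seconds
            setTimeout(function () {
                pollForResult(transactionId, index, onDone);
            }, 5000);
        }
    };
    xhr.send();
}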

This is the most complicated method to implement. You still have all of the “pool” headaches to contend with, just in a different place. Additionally, you have to persist processing state in your web application, and you need to ensure that the server is processing everything wholly asynchronously – something that may actually require additional code on the web server to provide CRON/scheduler service support, or a fully blown queue pick-up service.

You also need to convert your system into what is effectively a message passing system, which isn’t an easy task if you are maintaining or converting an existing system.

The advantage of this approach is that it is the best way to undertake the task. The disadvantage however is that you can quickly get into an infinite spiral of abstraction that ultimately pushes the point where the work is performed in your programme to increasingly lower levels.

At that point, you’ve probably forgotten you were trying to calculate Pi in the first place.