Archive for the ‘Uncategorized’ Category

IP Camera Alternatives to Dahua, Hikvision, or Huawei?

November 17, 2019

Where’s my IP camera made?

Starting August 13, 2019, the National Defense Authorization Act banned the US government from procuring cameras from Dahua, Hikvision, and Huawei.

The upside of this attention is that it highlighted how many security camera brands had popped up over the years, with variable quality and poor to no ongoing firmware updates for discovered security issues. Mostly, as a consumer, it made searching for “network cameras” on Amazon or Newegg a terrible process, as it was difficult to filter out quality brands from dead-end and potentially security-compromised ones. That many of these cameras were just re-labeled cameras produced by Dahua or Hikvision using SoCs by Huawei is a different matter.

However, if you’re looking for cameras not associated with Dahua, Hikvision, or Huawei, try the manufacturers/brands below. I learned about these while researching the brands I currently use, like Hikvision, Reolink, and TRENDnet.

Brand                              | Made By or SoC Made By                                | NDAA Compliant?                               | Country of Origin
American Dynamics                  | Made by Tyco Security Products, now Johnson Controls  | Yes (some cameras)                            | Ireland
Arecont Vision                     | OEM                                                   | Yes                                           | USA
Avigilon                           | Made by Motorola                                      | Yes                                           | Canada
Axis Communications                | SoC is Axis-developed ARTPEC                          | Yes (though SMB cameras use Huawei HiSilicon) | Sweden
Hanwha (i.e. Samsung WiseNet)      | SoC is Hanwha-developed Wisenet chips, e.g. Wisenet 5 | Yes                                           | South Korea
Honeywell (not Performance Series) | Made by Vivotek                                       | Yes                                           | Taiwan
Illustra                           | Made by Tyco Security Products, now Johnson Controls  | Yes (some cameras)                            | Ireland
Pelco                              | ?                                                     | Yes                                           | USA
Ubiquiti                           | SoC is Ambarella, USA                                 | Yes                                           | USA
Vivotek                            | SoC made by VATICS, spun out of Vivotek in 2007       | Yes (some cameras)                            | Taiwan

If you’re looking for a larger list of cameras and brands including those made by or with parts from Dahua, Hikvision, or Huawei, see this Google Sheets workbook here.

Dahua-made Cameras

Dahua OEM Directory 6 NOV 2019

Hikvision-made Cameras


Categories: Uncategorized

ResinOS Root SSH logon with Putty

August 20, 2018

If you’re running your Home Assistant installation using Hass.io, SSH interaction with Home Assistant is usually through port 22. This connects to the Docker guest image running Home Assistant within the HassOS/ResinOS hypervisor.

Interaction with the physical host (e.g. your Raspberry Pi) requires connecting to SSH port 22222 which is configured by default to only accept SSH connections with a public/private key pair. Official instructions for setting up this connection can be found here. For additional settings confirmation, continue reading.
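If you use OpenSSH instead of Putty, the equivalent connection is a one-liner. A quick way to sanity-check it without actually connecting is the client’s `-G` flag, which prints the configuration ssh would use (the hostname “hassio.local” and key path here are assumptions; substitute whatever you use for your install):

```shell
# Real connection to the host (not the Home Assistant container):
#   ssh -i ~/.ssh/hassos_key -p 22222 root@hassio.local
# Dry run: -G resolves and prints the client configuration without
# connecting, handy for confirming the target user and port:
ssh -G -p 22222 -i ~/.ssh/hassos_key root@hassio.local | grep -E '^(user|port) '
```

The filtered output should show `user root` and `port 22222`, matching the Putty profile configured below.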


  1. Retrieve your Home Assistant’s SD card and mount it locally. Most partitions won’t be natively readable in Windows, which is fine, as you will only need to write to the SD card’s “boot” partition (the only one that should be readable in Windows). The steps below assume it is mounted as the “G:\” drive.
  2. Generate a new public/private key pair with Putty Key Generator, puttygen.exe
    1. Launch puttygen
    2. Default parameters of “RSA” and key size of “2048” will work.
    3. Enter in a key passphrase to be used whenever the private key is unlocked.
    4. Click “Generate” and follow on-screen instructions to generate randomness until the key generation process is completed.
    5. Screenshot: 20180820 - SSH Hassio ResinOS
    6. Click “Save private key” and save it in a location accessible wherever you will want to SSH from.
    7. You can also click “save public key” to save that file in the same location as your private key, though you will not use it in that format.
    8. Copy and paste the “OpenSSH authorized_keys file” section circled above into a new text file at path “G:\authorized_keys”
      1. Make sure there is no “.txt” included in the file name.
      2. The contents of the file should be one line and look like this:
      3. ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAivEKRSB7gTs8DoY36n4tK+vUvwNHzUkZthTawQH/LRfkn/g+0LfSQilrTm1fKaW4Te0mbtF01L0LYZO5kdkaI/BaBTHWvTmO049OWYAbVSROAXgdtm/UrlcWm2Z3f1vIfPVRxHrAL2Qw3RJ/e0fUIqzgwKrEG0HWGgWbRZZhbiPEGPkvx5F78jIEAE4ZkIfOFEYGRgOdG5om3HdfBY6pytEcLgJW2hFTgrWJ+YnAal/OGhVCZlqmxX8kBlsXKXGFVzkMMVX0/p68FP0L93c1bozIt0nBiWovasglIEn8h+O3Wz93Mnt8HhcWwj5NmhKoDX4AFyr53t7lIP9tV/FydQ== rsa-key-20180820
  3. “Eject” the SD card, place it back into your device, and boot it up again.
  4. Launch Putty to configure your Hass.io Host profile
    1. Session >
      1. Host Name (or IP address): {same that you use normally for Home Assistant}
      2. Port: 22222
    2. Connection >
      1. Data >
        1. Auto-login username: root
      2. SSH >
        1. Private key file for authentication: {path to your private key above, e.g. “c:\users\foo\desktop\ssh_private”}
  5. In Putty, go back to the Session category node and click “Save” on your profile.
    1. Now click “Open”
    2. You should see the following in Putty:
      1. Using username “root”.
        Authenticating with public key “rsa-key-20180819”
        Passphrase for key “rsa-key-20180819”:
    3. Enter your private key’s passphrase to continue.
  6. At this point you are connected to your Hass.io Host environment. From here you can perform a different set of administrative tasks including:
    1. List running Docker containers: docker ps
    2. ResinOS logs: docker logs resin_supervisor
    3. HassOS logs: docker logs hassos_supervisor
    4. Reload udev rules: udevadm control --reload-rules
    5. Re-add devices: udevadm trigger
    6. System log entries (e.g. after inserting a new USB device into your Raspberry Pi): dmesg
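If you’d rather skip puttygen, the same key pair can be produced with OpenSSH’s ssh-keygen. This is a sketch assuming the file name “hassos_key” and a placeholder passphrase; copy the resulting “authorized_keys” file to the SD card’s boot partition exactly as in step 2.8:

```shell
# Generate a 2048-bit RSA key pair, matching the puttygen defaults above.
# -N sets the key passphrase; -f picks the output file names ("hassos_key"
# is an assumed name -- use whatever you like).
ssh-keygen -q -t rsa -b 2048 -N 'changeme' -f ./hassos_key

# The .pub half is already in the one-line authorized_keys format the
# boot partition expects:
cp ./hassos_key.pub ./authorized_keys
cat ./authorized_keys
```

If you still want to connect with Putty, load the OpenSSH private key via puttygen’s Conversions > Import key, then use “Save private key” to produce a .ppk.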

Troubleshooting the various errors you might see trying to connect from Putty:

      1. Disconnected: No supported authentication methods available (server sent: publickey)
        1. Solution 1: You’re trying to use password authentication only. The host is configured to only allow publickey authentication, which requires saving your public key to the boot partition in file “authorized_keys” and using the matching private key in your SSH client. Use the instructions above or here to configure a key pair for use.
        2. Solution 2: Your installation might still be using the ResinOS host environment and you crossed over in these instructions, which resulted in using username “root@hassio.local”. Use “root” as your username instead.
        3. Solution 3: Your installation is using HassOS and you’re trying to log in with user “root”. Use “root@hassio.local” instead.
      2. Unable to use key file “\\foo\bar\hassio_private_openssh.ppk” (OpenSSH SSH-2 private key (new format))
        1. In PuttyGen you might have clicked Conversions > “Export OpenSSH key”, this is a different format. Use the key from Puttygen’s “Save private key” menu operation or button.


Categories: Uncategorized

Restart Hyper-V Guests Automatically When iSCSI Volume Goes Offline

August 13, 2018

I have a couple of Hyper-V Server 2016 hosts with guest VHDX’s stored on some Synology-based iSCSI volumes. I like this setup as I don’t have to worry about maintaining reliable storage on the hosts as I’ve already invested in that with my Synology NAS and my iSCSI volumes are more than fast enough for my usage.

When I first got the guests running on the iSCSI volume I would have bi-monthly issues where the guests would go “Critical” and would refuse to boot unless I restarted the Hyper-V host. A little while later I realized my update installations and reboots of the NAS were taking the iSCSI volumes offline, which Hyper-V didn’t handle well as it would get stuck “Reconnecting.” Additionally, having the iSCSI volume go out underneath running guests was a reliable way to have the guests BSOD.

Fixes for this were:

1) Reconfigure the iSCSI volumes to auto-reconnect once the Synology NAS had rebooted.

2) Use WMI event binding to handle the iScsiPrt System events for the volume going offline and then reconnecting successfully:

  1. EventID=20, “Connection to the target was lost. The initiator will attempt to retry the connection.”
  2. EventID=34, “A connection to the target was lost, but Initiator successfully reconnected to the target. Dump data contains the target name.”

Part 1 was mostly just reconfiguring the iSCSI connection again and ensuring that my iSCSI target was marked as a “favorite”. I still can’t explain why it would get stuck on “Reconnecting”. As of now, with all updates to Hyper-V 2016 and Synology installed, everything is working well.

Part 2 was trickier. I knew from looking at the Event Log that the two iScsiPrt events reliably occurred during the iSCSI outages. Searching around online turned up WMI eventing as a route to trigger an action after an event was logged, and doing so with “permanent” event subscriptions meant they would survive host reboots. Here’s what I ended up with, including my path names for reference:


# See existing event filters, consumers, and bindings
# Get-WmiObject -Namespace root\Subscription -Class __EventFilter
# Get-WmiObject -Namespace root\Subscription -Class __EventConsumer
# Get-WmiObject -Namespace root\Subscription -Class __FilterToConsumerBinding

# Configure Down Events

$FilterNameDown = "iSCSI_TargetConnection_EventFilter_Down"
$ExistingFilterDown = Get-WmiObject -Namespace root\Subscription -Class __EventFilter -Filter "name='$FilterNameDown'"
if ($ExistingFilterDown -ne $null) {
    $ExistingFilterDown | Remove-WmiObject -Verbose
    $ExistingFilterDown = $null
    Write-Host "Deleted existing DOWN event filter."
}

$ConsumerNameDown = "iSCSI_TargetConnection_EventConsumer_Down"
$ExistingCommandDown = Get-WmiObject -Namespace root\Subscription -Class CommandLineEventConsumer -Filter "name='$ConsumerNameDown'"
if ($ExistingCommandDown -ne $null) {
    $ExistingCommandDown | Remove-WmiObject -Verbose
    $ExistingCommandDown = $null
    Write-Host "Deleted existing DOWN event command."
}

$ExistingBindingDown = Get-WMIObject -Namespace root\Subscription -Class __FilterToConsumerBinding -Filter "__Path LIKE '%$ConsumerNameDown%'"
if ($ExistingBindingDown -ne $null) {
    $ExistingBindingDown | Remove-WmiObject -Verbose
    $ExistingBindingDown = $null
    Write-Host "Deleted existing DOWN event binding."
}

$QueryDown = "SELECT * FROM __InstanceCreationEvent WHERE TargetInstance ISA 'Win32_NTLogEvent' AND TargetInstance.LogFile='System' AND TargetInstance.SourceName='iScsiPrt' AND TargetInstance.EventCode=20"
$WMIEventFilterDown = Set-WmiInstance -Class __EventFilter -Namespace "root\Subscription" -Arguments @{Name=$FilterNameDown;EventNameSpace="root\cimv2";QueryLanguage="WQL";Query=$QueryDown}
Write-Host "Created new DOWN event Filter. " $WMIEventFilterDown.Path

$CommandLineTemplateDown = "powershell.exe -command `". 'c:\bin\iSCSI_OnIscsiChange.ps1' 'Down' '%TargetInstance.Message%'`""
$ExecutablePath = "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"
$WMIEventConsumerDown = Set-WmiInstance -Class CommandLineEventConsumer -Namespace "root\subscription" `
    -Arguments @{Name=$ConsumerNameDown;CommandLineTemplate=$CommandLineTemplateDown;ExecutablePath=$ExecutablePath }
Write-Host "Created new DOWN event consumer."

$result = Set-WmiInstance -Class __FilterToConsumerBinding -Namespace "root\subscription" -Arguments @{Filter=$WMIEventFilterDown;Consumer=$WMIEventConsumerDown}
Write-Host "Created new DOWN event binding."

# Configure Up Events

$FilterNameUp = "iSCSI_TargetConnection_EventFilter_Up"
$ExistingFilterUp = Get-WmiObject -Namespace root\Subscription -Class __EventFilter -Filter "name='$FilterNameUp'"
if ($ExistingFilterUp -ne $null) {
    $ExistingFilterUp | Remove-WmiObject -Verbose
    $ExistingFilterUp = $null
    Write-Host "Deleted existing UP event filter."
}

$ConsumeNameUp = "iSCSI_TargetConnection_EventConsumer_Up"
$ExistingCommandUp = Get-WmiObject -Namespace root\Subscription -Class CommandLineEventConsumer -Filter "name='$ConsumeNameUp'"
if ($ExistingCommandUp -ne $null) {
    $ExistingCommandUp | Remove-WmiObject -Verbose
    $ExistingCommandUp = $null
    Write-Host "Deleted existing UP event command."
}

$ExistingBindingUp = Get-WMIObject -Namespace root\Subscription -Class __FilterToConsumerBinding -Filter "__Path LIKE '%$ConsumeNameUp%'"
if ($ExistingBindingUp -ne $null) {
    $ExistingBindingUp | Remove-WmiObject -Verbose
    $ExistingBindingUp = $null
    Write-Host "Deleted existing UP event binding."
}

$QueryUp = "SELECT * FROM __InstanceCreationEvent WHERE TargetInstance ISA 'Win32_NTLogEvent' AND TargetInstance.LogFile='System' AND TargetInstance.SourceName='iScsiPrt' AND TargetInstance.EventCode=34"
$WMIEventFilterUp = Set-WmiInstance -Class __EventFilter -Namespace "root\Subscription" -Arguments @{Name=$FilterNameUp;EventNameSpace="root\cimv2";QueryLanguage="WQL";Query=$QueryUp}
Write-Host "Created new UP event Filter. " $WMIEventFilterUp.Path

#$CommandLineTemplateUp = 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy Bypass -File c:\bin\iSCSI_OnIscsiChange.ps1 -state Up -eventMessage %TargetInstance.Message%'
$CommandLineTemplateUp = "powershell.exe -command `". 'c:\bin\iSCSI_OnIscsiChange.ps1' 'Up' '%TargetInstance.Message%'`""
$ExecutablePath = "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"
$WMIEventConsumerUp = Set-WmiInstance -Class CommandLineEventConsumer -Namespace "root\subscription" `
    -Arguments @{Name=$ConsumeNameUp;CommandLineTemplate=$CommandLineTemplateUp;ExecutablePath=$ExecutablePath }
Write-Host "Created new UP event consumer."

$result = Set-WmiInstance -Class __FilterToConsumerBinding -Namespace "root\subscription" -Arguments @{Filter=$WMIEventFilterUp;Consumer=$WMIEventConsumerUp}
Write-Host "Created new UP event binding."

Write-Host Done.

Pay attention to the namespaces used: you mostly want “root\Subscription”, but the event filter’s EventNamespace must be “root\cimv2”.

Then have this file sitting on disk being referenced by the WMI EventConsumer:


$state = $args[0]
$eventMessage = $args[1]

Function Write-Log {
    param(
        [Parameter(Mandatory=$False)] [ValidateSet("INFO","WARN","ERROR","FATAL","DEBUG")] [String] $Level = "INFO",
        [Parameter(Mandatory=$True)] [string] $Message,
        [Parameter(Mandatory=$False)] [string] $logfile
    )

    $Stamp = (Get-Date).toString("yyyy/MM/dd HH:mm:ss")
    $Line = "$Stamp $Level $Message"
    If ($logfile) {
        Write-Output $Line
        Add-Content $logfile -Value $Line
    } Else {
        Write-Output $Line
    }
}

$Logfile = "C:\bin\iSCSI_EventLog.log"
Write-Log DEBUG "New iSCSI state: $state" $Logfile
Write-Log DEBUG "Event message: $eventMessage" $Logfile

if ($state -eq "Down") {
    Write-Log INFO "iSCSI reported 'connection to target was lost', turning virtual machines off!" $Logfile
    Get-VM | where {$_.State -eq 'Running'} | Tee-Object -Append -FilePath $Logfile | Stop-VM -TurnOff -Force | Tee-Object -Append -FilePath $Logfile
} else {
    Write-Log INFO "iSCSI reported Initiator reconnected to target, turning virtual machines back on" $Logfile
    Get-VM | Tee-Object -Append -FilePath $Logfile | Start-VM | Tee-Object -Append -FilePath $Logfile
}

I couldn’t get named parameters working with my CommandLineEventConsumer, just positional ones, hence the “$args” usage above.


  1. To install the event subscriptions, save the script above as “C:\bin\iSCSI_Monitor.ps1” and run it in PowerShell:
    1. PS C:\> .\bin\iSCSI_Monitor.ps1

Testing Options

Consider using the “-WhatIf” parameter on Start-VM and Stop-VM in iSCSI_OnIscsiChange.ps1 unless you want to actually stop/start your VMs.

Confirm actions in log file “C:\bin\iSCSI_EventLog.log”.

Option A: Create fake events with “eventcreate”

  1. CMD> eventcreate /T INFORMATION /L SYSTEM /ID 20 /SO Test /D Test
  2. Note:
    1. You will need to remove the “AND TargetInstance.SourceName='iScsiPrt'” filter condition, as the events will show up with Source “Test”

Option B: Disable/Enable the Targets from within Synology’s “iSCSI Manager” app.

This is faster than actually rebooting your Synology NAS, if you’re okay with taking your iSCSI Targets offline, which also means actually turning off your guest VMs.

  1. Launch “iSCSI Manager” in your Synology Dashboard
  2. Screenshot: 20180814 - Synology iSCSI Manager
  3. Select your Target you wish to test taking offline, click “Disable”, then click “Yes” to acknowledge the confirmation dialog.
  4. Once you’ve confirmed in the log file the event was received then click “Enable” to turn the Target back on.  State should eventually go from “online” to “connected” once your Hyper-V host connects again.
Categories: Uncategorized

Windows Server 2016 Hyper-V “password is not correct” to Synology share

July 17, 2017

I could not get Windows Server 2016 Hyper-V (10.0.14393) in a WORKGROUP to authenticate to a Synology SMB share when the Synology unit (DSM 6.1.3-15152) was set to a maximum SMB version of 3. I kept getting the error “System error 86 has occurred. The specified network password is not correct.” (86 == ERROR_INVALID_PASSWORD) on the Hyper-V system. I retried typing the username and password many times, all with the same error, even though I knew the username and password were okay because I could use the same share from a Windows 7 and a Windows 10 machine. I never saw any errors in Synology’s log or in Windows Event Viewer about this.

I had to go back to SMB 2.0 on the Synology: in DSM, Control Panel > File Services > SMB > Advanced Settings > “Maximum SMB” set to SMB2. Then Windows Server 2016 Hyper-V Core would connect to the Synology share.

Categories: Uncategorized

“No valid certificates were found on this smart card”

June 30, 2015

At work we use smart cards for TFA, largely for accessing company resources remotely. I’m currently using a Gemalto .NET smart card with an OMNIKEY Cardman 6121, a SIM-sized smart card plugged into a USB dongle, which is more convenient than the older full-sized card and wired Omnikey 3021 I used previously.

For years this setup has been fine.  Connect to work from home, when certificates expire renew them, and when the card runs out of space delete the expired certs.

However a few months ago I started seeing the following error instead of getting prompted for my PIN:


No valid certificates were found on this smart card. Please try another smart card or contact your administrator

The same smart card still worked on my laptop and on other PCs, so it wasn’t a matter of expired certs. But complicating matters was that my home PC’s TPM, in which I had stored virtual smart cards, had those same certs expire at roughly the same time, and the error “No valid certificates” can be interpreted (I feel) as “we found certs, but none of them are valid,” so I spent some extra cycles making sure all my certificates were updated and valid instead of finding the real problem. (And between then and now I also updated the same system to Windows 10, which would explain the difference in screenshots.)

After putting the problem aside for a while, I eventually noticed my working laptop was using one Gemalto mini-driver version while the system that didn’t work was using another. I installed the working Gemalto version from the Windows Driver Catalog, but whenever I selected that driver for the card it would update back to the non-working version, and because I was busy with other things at the time I didn’t really press on it.


Screenshot: .NET Gemalto search on the Windows Driver Catalog (to install, download the cab locally, unpack it, then right-click the .inf and select “Install”)

However, when I went to enumerate the certificates on the card via the command `certutil -scinfo > scinfo.txt`, instead of getting a PIN prompt and the certificates I got this error:


The smart card cannot perform the requested operation or the operation requires a different smart card

and the scinfo.txt output file indicated failures reading the key container when the same operation succeeded on my laptop:

================ Certificate 0 ================
--- Reader: OMNIKEY CardMan 6121 0
--- Card: Axalto Cryptoflex .NET
Provider = Microsoft Base Smart Card Crypto Provider
Key Container = (null) [Default Container]
Cannot open the AT_SIGNATURE key for reader: OMNIKEY CardMan 6121 0 …

Knowing the key container is read/written by the mini-driver, the failure to read it still hinted at some incompatibility with the particular Gemalto mini-driver version I had installed.

So I went back to Device Manager (devmgmt.msc) and selected the Gemalto IDPrime .NET Smart Card node under the Smart cards node. From there I selected “Update Driver…” > “Browse my computer for driver software” > “Let me pick from a list of device drivers on my computer” and chose the previously-installed driver version:


Now when authenticating I can see the smart card LED blinking, finally indicating activity, and eventually the familiar PIN prompt:


Hooray, I can work from home again.

Categories: Uncategorized

Enable touch on HP 2740p with Windows 8.1

January 25, 2014

After upgrading my HP 2740p from Windows 8 to Windows 8.1 I lost touch input. The Wacom pen input still worked. I re-installed PenTablet_533-3.exe and Wacom Digitizer Driver – sp52863.exe (in that order) but touch wasn’t restored.

It wasn’t until I opened Windows’ Control Panel, searched for Touch, launched the Touch Settings app, and ran through the 16-point calibration that touch was restored.


Touch Settings v3.0.7-24

Touch Settings (WTouchCPL.exe) on my system is installed to C:\Program Files\WTouch

Categories: Uncategorized