
Radon Sensor Review: Airthings Wave Plus versus RadonEye RD200

November 28, 2020

This is a continuation of this post.

Here we compare the radon readings from the Airthings Wave Plus and the RadonEye RD200 devices. In total, data from two Wave Plus sensors and one RD200 is used. There are no calibrated reference readings to compare against, so conclusions are drawn from cross-checking the devices against each other and from environmental factors that can be controlled, like adding fresh air and moving device locations. Conclusion: the RD200 is fast and achieves good accuracy within an hour, whereas the Wave Plus devices as specified require more than 7 days for accuracy and in practice produce readings that are confusing at times.

Beyond active radon mitigation, getting fresh outdoor air inside is effective at reducing indoor radon levels. We live in a national Zone 3 radon region where levels under 2.0 pCi/L are common, but levels can vary significantly with local geography and house construction, and our immediate area is not without risk. Even when radon readings are not high enough to require active radon mitigation (>= 4.0 pCi/L) or even to suggest it (2.0 – 4.0 pCi/L), there is still health value in keeping them as close to outdoor air (0.4 pCi/L) as possible, as there is no known “safe” level of radon.
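The guidance bands above can be sketched as a small helper (a minimal sketch; the thresholds are the EPA figures quoted in this paragraph, while the function name and labels are my own):

```python
def radon_guidance(pci_l: float) -> str:
    """Map a radon reading in pCi/L to the guidance bands above:
    >= 4.0 warrants active mitigation, 2.0-4.0 suggests considering
    it, and anything lower is still worth pushing toward the
    ~0.4 pCi/L typical of outdoor air."""
    if pci_l >= 4.0:
        return "mitigate"
    if pci_l >= 2.0:
        return "consider mitigation"
    return "below action levels"

for reading in (0.4, 2.5, 4.1):
    print(f"{reading} pCi/L -> {radon_guidance(reading)}")
```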

Our 1974 home is built into a hill with a daylight basement. When we moved into this house earlier this year we brought our Airthings Wave Plus sensor with us and started seeing slightly higher radon readings of 0.5 – 2.3 (average 0.8) pCi/L on the upper sleeping floor. Our prior drafty slab-on-grade house with no basement saw radon levels on the order of 0.0 – 1.6 (average 0.6) pCi/L. While the newer house didn’t have levels high enough to warrant mitigation, having a young family and spending significant time indoors made it worth double-checking the data. At this point we acquired a second Airthings Wave Plus to get dedicated readings from the basement floor. While it showed slightly higher readings than the upper floor, I found the radon readings from both Wave Plus devices to be non-responsive to the sometimes significant amount of fresh air being brought in, which was suspicious given that fresh air is a known method of reducing radon. At this point I started to question the Airthings Wave Plus radon readings.

Here we see many times where closing windows and turning off fresh outdoor air causes the Wave Plus radon readings to strangely plummet and vice versa where opening windows and turning on fans causes the readings to skyrocket.

At this point I looked around for other data-logging radon detectors and found the RadonEye RD200. It uses the same radon sensor as the RadonEye Pro professional device (which comes with certifications), and both are factory-calibrated and tested for accuracy of <10% at 10 pCi/L in under 60 minutes, whereas the Airthings Wave Plus is only rated at ±10% at 5.4 pCi/L after 7 days, or ±5% at 5.4 pCi/L after 2 months. That is, the RD200 is significantly faster than the Wave Plus at measuring radon.

As soon as we purchased a RadonEye RD200 its readings made much more sense. Windows being opened and fans being turned on to bring in fresh air caused radon levels to drop and closing up windows correspondingly allowed them to rise back up.

Opening a window and turning on a fan causes fresh air to dilute stale air causing radon levels to drop. Closing windows and turning off air cycling causes radon levels to rise.

At this point, with a responsive radon measurement device on hand, I paired it with the basement Wave Plus and moved the pair around the house to cross-check their radon readings and look for patterns.

Devices were kept in the same location in each room for 24 hours to 4 days.

Location | RadonEye RD200 | Airthings Wave Plus
Basement office | 0.08 – 1.84 (0.73 average) | 0.32 – 4.73 (1.09 average)
Basement bathroom | 0.14 – 1.28 (0.75 average) | 0.95 – 1.59 (1.27 average)
Basement den | 0.14 – 1.66 (0.85 average) | 0.24 – 1.11 (0.62 average)
Basement garage | 0.05 – 0.43 (0.26 average) | 0.46 – 0.59 (0.51 average)
Upstairs bedroom | 0.14 – 1.54 (0.63 average) | 0.14 – 1.24 (0.66 average)
Devices were side-by-side and readings were from same timespan.

Some conclusions from the readings:

  • Opening a window and turning on a fan to bring fresh air into a room is very effective at reducing radon levels. When doing this, open another window on the other side of the floor to allow air to vent out.
  • Upstairs bedroom averages 0.5 – 0.75 pCi/L lower than corresponding basement readings
  • Airthings Wave Plus radon measurements move in directions opposite to expectations. It is not a good device for short-term determination of whether radon mitigation is working, though it is possibly adequate for medium-term analysis on the order of days or weeks.
  • Airthings Wave Plus can randomly record spikes in the data. When the spike in the chart below occurred I received a notification on my phone for exceedingly high radon levels at a pCi/L level higher than the data recorded. I recall it being a very suspicious value like 100+ but it didn’t occur to me to take a screenshot of it.
The recorded radon levels from the RD200 (green line) are much more responsive to fresh air mitigation and whether windows are opened or closed

Both devices detect radon levels independently of other environmental factors. Radon readings from the Wave Plus and RD200 devices were evaluated against the other environmental readings to check whether they were being biased. Initially it seemed like the RD200 was responding to temperature (a drop in temperature coincided with a drop in radon), but this was checked by leaving the device in a temperature-stable location, where the readings still fluctuated based on air freshness. In the end, both devices’ radon readings appear to be determined largely by how much stale air there is and by the existing sources of radon.

Radon readings compared to Temperature (F) and Humidity (%)
Radon readings compared to CO2
Radon readings compared to VOC
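The bias checks above boil down to looking for correlation between the radon series and each environmental series. A minimal sketch of such a check with a hand-rolled Pearson coefficient (the sanity-check series below are synthetic; with real exports you would pass the hourly radon readings alongside the matching temperature, CO2, or VOC readings, and a coefficient near zero supports the no-bias conclusion):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Sanity check on perfectly correlated / anti-correlated series:
print(round(pearson([1, 2, 3, 4], [2, 4, 6, 8]), 6))   # 1.0
print(round(pearson([1, 2, 3], [3, 2, 1]), 6))         # -1.0
```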

Two Airthings Wave Plus devices can read very different radon values. Airthings only specifies ±10% accuracy within 7 days. In most of my household measurements, the values from the two devices located on different floors have been nowhere close to each other. But by the end of the fourth day of keeping the devices next to each other, the values do start to move closer, so it’s possible that with additional time they’d align. However, it’s clear that Airthings Wave Plus radon sensors do not have the monitoring responsiveness of the RadonEye, and responsiveness is very helpful when testing different mitigation strategies.

Two Airthings Wave Plus sensors side by side read different radon values. Possibly corrects itself when left to calibrate after at least 7 days.

Categories: Uncategorized

Airthings Wave Plus vs RadonEye RD200

November 16, 2020

We have a couple of Airthings Wave Plus devices to keep track of home air quality metrics like CO2, TVOC (Total Volatile Organic Compounds), and radon levels. While the non-radon readings like CO2 and temperature have been accurate and responsive, over time we noticed that the radon readings did not follow environmental changes like opened windows and air cycles as quickly, or at all. Instead the radon readings seemed to rise and fall at their own pace. I chalked it up to stronger outdoor effects like rain, but given I was starting to track down potential radon sources (like the effects of our open basement floor shower drain), I wanted readings that were both accurate and responsive to help validate any fixes.

The initial sensor location is a daylight-basement office (11′ x 13′) with a single small 1′ x 4′ window with a 1′ x 2′ opening. There’s a small but strong 9″ fan (Honeywell HT-800) inside the window bay that pulls in enough fresh air to easily turn over the room’s air several times over the course of the day. The entire floor is a daylight basement on a steep hill, where the back side of the house is fully below ground and the front side is fully above ground. We live in a Zone 3 radon region (where average indoor levels are <=2 pCi/L), but more locally the risk is moderate. Both sensors sit on a small shelf near each other, about 4′ from the floor and 1′ away from the wall.
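For a sense of scale, the room’s air turnover can be estimated from the dimensions above. The 8′ ceiling height and the ~100 CFM effective fresh-air delivery are my assumptions for illustration, not figures measured in this post:

```python
# Rough air-changes-per-hour estimate for the 11' x 13' office.
room_volume_ft3 = 11 * 13 * 8      # assumed 8' ceiling -> 1144 cubic feet
fan_cfm = 100                      # assumed effective fresh-air flow, cubic feet/minute

air_changes_per_hour = fan_cfm * 60 / room_volume_ft3
print(round(air_changes_per_hour, 1))  # roughly 5 air changes per hour
```

Even at half that flow the room’s air would still turn over several times a day, consistent with the description above.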

Enter a RadonEye RD200, purchased to compare against the Wave Plus. It touts itself as being “>10x more sensitive and accurate than other home radon detectors“. The RadonEye takes readings every 10 minutes, though the data export from its app only yields data in 1-hour increments, with each data point being a 60-minute moving average. Neither the Wave Plus nor the RadonEye is certified, but the RadonEye Pro is “AARST-NRPP and NRSB” certified, and both the Pro and non-Pro versions tout very similar features, including the sampling rate and claimed accuracy, so there’s a non-trivial chance the non-Pro version is as good as the Pro version, just without the certification backing it up.
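The app’s export behavior can be sketched as a 60-minute moving average over six 10-minute samples (my reading of the export format, not a documented algorithm; the readings below are made-up examples):

```python
def hourly_moving_average(samples_10min):
    """Average each run of six consecutive 10-minute readings,
    i.e. a 60-minute moving window stepped every 10 minutes."""
    window = 6
    return [
        round(sum(samples_10min[i - window + 1 : i + 1]) / window, 2)
        for i in range(window - 1, len(samples_10min))
    ]

# Eight 10-minute readings -> three overlapping 60-minute averages
print(hourly_moving_average([0.9, 1.0, 1.1, 1.2, 1.1, 1.0, 0.6, 0.4]))
```

Note how the drop at the end of the raw series is smoothed in the averaged output; that smoothing is worth keeping in mind when reading the exported charts.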

With only a couple of days’ worth of results to compare against the Wave Plus so far, the RD200 seems to be much more responsive and tracks well against changes in the environment. For the last 48 hours, here is an Excel rendering of the data exported from RadonEye’s app and the same timeframe via Airthings’ https://dashboard.airthings.com:

Summary table for that same 48 hour period:

Device | Low pCi/L | High pCi/L | Average pCi/L
Airthings Wave Plus | 0.3 | 2.2 | 1.1
RadonEye RD200 | 0.17 | 1.45 | 0.7

So far the numbers are in roughly the same ballpark, but the Airthings reads about 41 – 55% higher at any given time and seems to be moving with a 12-hour moving average or… something. For instance, at 5pm today (11/16) the RadonEye noted levels of 0.17 pCi/L after a full work-day’s worth of a fan pulling in fresh air, yet Airthings recorded 2.2 pCi/L, the highest level of the 48-hour period.
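The gap between the devices can be put in numbers with a quick helper (a sketch using the 48-hour summary table values above):

```python
def percent_higher(a: float, b: float) -> int:
    """How much higher reading a is than reading b, as a rounded percent."""
    return round((a - b) / b * 100)

# Using the highs and averages from the summary table above:
print(percent_higher(2.2, 1.45))   # Wave Plus high vs RD200 high
print(percent_higher(1.1, 0.7))    # Wave Plus average vs RD200 average
```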

To be updated after a full week’s worth of data…


IP Camera Alternatives to Dahua, Hikvision, or Huawei?

November 17, 2019

Where’s my IP camera made?

Starting August 13th, 2019 the National Defense Authorization Act banned the US government from procuring cameras from Dahua, Hikvision, and Huawei.

The upside of this attention is that it brought to light the fact that many security camera brands had popped up over the years with variable quality and poor to no ongoing firmware updates for discovered security issues. As a consumer, it made searching for “network cameras” on Amazon or Newegg a terrible process, as it was difficult to filter out quality brands from dead-end and potentially security-compromised ones. That many of these cameras were just re-labeled cameras produced by Dahua or Hikvision using SoCs by Huawei is a different matter.

However, if you’re looking for cameras not associated with Dahua, Hikvision, or Huawei, try the manufacturers/brands below. I learned about these while researching the brands I currently use, like Hikvision, Reolink, and TRENDnet.

Brand | Made By or SoC Made By | NDAA Compliant? | Country of Origin
American Dynamics | Made by Tyco Security Products, now Johnson Controls | Yes (some cameras) | Ireland
Arecont Vision | OEM | Yes | USA
Avigilon | Made by Motorola | Yes | Canada
Axis Communications | SoC is Axis-developed ARTPEC | Yes (though SMB cameras use Huawei HiSilicon) | Sweden
Hanwha (i.e. Samsung WiseNet) | SoC is Hanwha-developed Wisenet chips, e.g. Wisenet 5 | Yes | South Korea
Honeywell (not Performance Series) | Made by Vivotek | Yes | Taiwan
Illustra | Made by Tyco Security Products, now Johnson Controls | Yes (some cameras) | Ireland
Pelco | ? | Yes | USA
Ubiquiti | SoC is Ambarella (USA) | Yes | USA
Vivotek | SoC made by VATICS, spun out of Vivotek in 2007 | Yes (some cameras) | Taiwan

If you’re looking for a larger list of cameras and brands including those made by or with parts from Dahua, Hikvision, or Huawei, see this Google Sheets workbook here.

Dahua-made Cameras

Dahua OEM Directory 6 NOV 2019
via ipvm.com

Hikvision-made Cameras

via ipvm.com


Hass.io ResinOS Root SSH Logon with PuTTY

August 20, 2018

If you’re running your Home Assistant installation using Hass.io, SSH interaction with Home Assistant is usually through port 22. This connects to the Docker guest image running Home Assistant within the HassOS/ResinOS host.

Interaction with the physical host (e.g. your Raspberry Pi) requires connecting to SSH port 22222 which is configured by default to only accept SSH connections with a public/private key pair. Official instructions for setting up this connection can be found here. For additional settings confirmation, continue reading.


  1. Retrieve your Home Assistant’s SD card and mount it locally. Most partitions won’t be natively readable in Windows which is fine as you will only need to write to the SD card’s “boot” partition (the only one that should be readable in Windows). Assuming this is mounted as the “G:\” drive.
  2. Generate a new public/private key pair with Putty Key Generator, puttygen.exe
    1. Launch puttygen
    2. Default parameters of “RSA” and key size of “2048” will work.
    3. Enter in a key passphrase to be used whenever the private key is unlocked.
    4. Click “Generate” and follow on-screen instructions to generate randomness until the key generation process is completed.
    5. 20180820 - SSH Hassio ResinOS
    6. Click “Save private key” and save it in a location accessible wherever you will want to SSH from.
    7. You can also click “save public key” to save that file in the same location as your private key, though you will not use it in that format.
    8. Copy and paste the “OpenSSH authorized_keys file” section circled above into a new text file at path “G:\authorized_keys”
      1. Make sure there is no “.txt” included in the file name.
      2. The contents of the file should be one line and look like this:
      3. ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAivEKRSB7gTs8DoY36n4tK+vUvwNHzUkZthTawQH/LRfkn/g+0LfSQilrTm1fKaW4Te0mbtF01L0LYZO5kdkaI/BaBTHWvTmO049OWYAbVSROAXgdtm/UrlcWm2Z3f1vIfPVRxHrAL2Qw3RJ/e0fUIqzgwKrEG0HWGgWbRZZhbiPEGPkvx5F78jIEAE4ZkIfOFEYGRgOdG5om3HdfBY6pytEcLgJW2hFTgrWJ+YnAal/OGhVCZlqmxX8kBlsXKXGFVzkMMVX0/p68FP0L93c1bozIt0nBiWovasglIEn8h+O3Wz93Mnt8HhcWwj5NmhKoDX4AFyr53t7lIP9tV/FydQ== rsa-key-20180820
  3. “Eject” the SD card, place it back into your Hass.io device, and boot it up again.
  4. Launch Putty to configure your “Hass.io Host” profile
    1. Session >
      1. Host Name (or IP address): {same that you use normally for Home Assistant}
      2. Port: 22222
    2. Connection >
      1. Data >
        1. Auto-login username: root
      2. SSH >
        1. Private key file for authentication: {path to your private key above, e.g. “c:\users\foo\desktop\ssh_private”}
  5. In Putty, go back to the Session category node and click “Save” on your profile.
    1. Now click “Open”
    2. You should see the following in Putty:
      1. Using username “root”.
        Authenticating with public key “rsa-key-20180819”
        Passphrase for key “rsa-key-20180819”:
    3. Enter your private key’s passphrase to continue.
  6. At this point you are connected to your Hass.io’s Host environment. From here you can perform a different set of administrative tasks including:
    1. Docker list images: docker ps
    2. ResinOS logs: docker logs resin_supervisor
    3. HassOS logs: docker logs hassos_supervisor
    4. Reload udev rules: udevadm control --reload-rules
    5. Re-add devices: udevadm trigger
    6. System log entries (e.g. after inserting a new USB device into your Raspberry Pi): dmesg

Troubleshooting the various errors you might see trying to connect from Putty:

      1. Disconnected: No supported authentication methods available (server sent: publickey)
        1. Solution1: You’re trying to use password authentication only. The Hass.io host is configured to only allow publickey authentication which requires saving your private key to the Hass.io’s boot partition in file “authorized_keys” and using the private key in your SSH client. Use instructions above or here to configure a key pair for use.
        2. Solution2: Your Hass.io installation might still be using the ResinOS host environment and you followed the HassOS instructions, which resulted in using username “root@hassio.local”. Use “root” as your username instead.
        3. Solution3: Your Hass.io installation is using HassOS and you’re trying to login with user “root”. Use “root@hassio.local” instead.
      2. Unable to use key file “\\foo\bar\hassio_private_openssh.ppk” (OpenSSH SSH-2 private key (new format))
        1. In PuttyGen you might have clicked Conversions > “Export OpenSSH key”, this is a different format. Use the key from Puttygen’s “Save private key” menu operation or button.



Restart Hyper-V Guests Automatically When iSCSI Volume Goes Offline

August 13, 2018

I have a couple of Hyper-V Server 2016 hosts with guest VHDX’s stored on some Synology-based iSCSI volumes. I like this setup as I don’t have to worry about maintaining reliable storage on the hosts as I’ve already invested in that with my Synology NAS and my iSCSI volumes are more than fast enough for my usage.

When I first got the guests running on the iSCSI volume I would have bi-monthly issues where the guests would go “Critical” and refuse to boot unless I restarted the Hyper-V host. A little while later I realized my update installations and reboots of the NAS were taking the iSCSI volumes offline, which Hyper-V didn’t handle well, as it would get stuck “Reconnecting.” Additionally, having the iSCSI volume drop out underneath running guests was a reliable way to make the guests BSOD.

Fixes for this were:

1) Reconfigure the iSCSI volumes to auto-reconnect once the Synology NAS had rebooted.

2) Use WMI event binding to handle the iScsiPrt System events for the volume going offline and then reconnecting successfully:

  1. EventID=20, “Connection to the target was lost. The initiator will attempt to retry the connection.”
  2. EventID=34, “A connection to the target was lost, but Initiator successfully reconnected to the target. Dump data contains the target name.”

Part 1 was mostly just reconfiguring the iSCSI connection again and ensuring that my iSCSI target was marked as a “favorite”. I still can’t explain why it would get stuck on “Reconnecting”. As of now, with all updates to Hyper-V 2016 and Synology installed, everything is working well.

Part 2 was trickier. I knew from looking at the Event Log that the two iScsiPrt events reliably occurred during the iSCSI outages. Searching online turned up WMI eventing as a route to trigger an action after an event is logged, and doing so with “permanent” event subscriptions means they survive host reboots. Here’s what I ended up with, including my path names for reference:

c:\bin\iSCSI_Monitor.ps1

# See existing event filters, consumers, and bindings
# Get-WmiObject -Namespace root\Subscription -Class __EventFilter
# Get-WmiObject -Namespace root\Subscription -Class __EventConsumer
# Get-WmiObject -Namespace root\Subscription -Class __FilterToConsumerBinding

#
# Configure Down Events
#

$FilterNameDown = "iSCSI_TargetConnection_EventFilter_Down"
$ExistingFilterDown = Get-WmiObject -Namespace root\Subscription -Class __EventFilter -Filter "name='$FilterNameDown'"
if ($ExistingFilterDown -ne $null)
{
    $ExistingFilterDown | Remove-WmiObject -Verbose
    $ExistingFilterDown = $null
    Write-Host "Deleted existing DOWN event filter."
}

$ConsumerNameDown = "iSCSI_TargetConnection_EventConsumer_Down"
$ExistingCommandDown = Get-WmiObject -Namespace root\Subscription -Class CommandLineEventConsumer -Filter "name='$ConsumerNameDown'"
if ($ExistingCommandDown -ne $null)
{
    $ExistingCommandDown | Remove-WmiObject -Verbose
    $ExistingCommandDown = $null
    Write-Host "Deleted existing DOWN event command."
}

$ExistingBindingDown = Get-WMIObject -Namespace root\Subscription -Class __FilterToConsumerBinding -Filter "__Path LIKE '%$ConsumerNameDown%'"
if ($ExistingBindingDown -ne $null)
{
    $ExistingBindingDown | Remove-WmiObject -Verbose
    $ExistingBindingDown = $null
    Write-Host "Deleted existing DOWN event binding."
}

$QueryDown = "SELECT * FROM __InstanceCreationEvent WHERE TargetInstance ISA 'Win32_NTLogEvent' AND TargetInstance.LogFile='System' AND TargetInstance.SourceName='iScsiPrt' AND TargetInstance.EventCode=20"
$WMIEventFilterDown = Set-WmiInstance -Class __EventFilter -Namespace "root\Subscription" -Arguments @{Name=$FilterNameDown;EventNameSpace="root\cimv2";QueryLanguage="WQL";Query=$QueryDown}
Write-Host "Created new DOWN event Filter. " $WMIEventFilterDown.Path

$CommandLineTemplateDown = "powershell.exe -command `". 'c:\bin\iSCSI_OnIscsiChange.ps1' 'Down' '%TargetInstance.Message%'`""
$ExecutablePath = "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"
$WMIEventConsumerDown = Set-WmiInstance -Class CommandLineEventConsumer -Namespace "root\subscription" `
    -Arguments @{Name=$ConsumerNameDown;CommandLineTemplate=$CommandLineTemplateDown;ExecutablePath=$ExecutablePath }
Write-Host "Created new DOWN event consumer."

$result = Set-WmiInstance -Class __FilterToConsumerBinding -Namespace "root\subscription" -Arguments @{Filter=$WMIEventFilterDown;Consumer=$WMIEventConsumerDown}
Write-Host "Created new DOWN event binding."


#
# Configure Up Events
#

$FilterNameUp = "iSCSI_TargetConnection_EventFilter_Up"
$ExistingFilterUp = Get-WmiObject -Namespace root\Subscription -Class __EventFilter -Filter "name='$FilterNameUp'"
if ($ExistingFilterUp -ne $null)
{
    $ExistingFilterUp | Remove-WmiObject -Verbose
    $ExistingFilterUp = $null
    Write-Host "Deleted existing UP event filter."
}

$ConsumerNameUp = "iSCSI_TargetConnection_EventConsumer_Up"
$ExistingCommandUp = Get-WmiObject -Namespace root\Subscription -Class CommandLineEventConsumer -Filter "name='$ConsumerNameUp'"
if ($ExistingCommandUp -ne $null)
{
    $ExistingCommandUp | Remove-WmiObject -Verbose
    $ExistingCommandUp = $null
    Write-Host "Deleted existing UP event command."
}

$ExistingBindingUp = Get-WMIObject -Namespace root\Subscription -Class __FilterToConsumerBinding -Filter "__Path LIKE '%$ConsumerNameUp%'"
if ($ExistingBindingUp -ne $null)
{
    $ExistingBindingUp | Remove-WmiObject -Verbose
    $ExistingBindingUp = $null
    Write-Host "Deleted existing UP event binding."
}

$QueryUp = "SELECT * FROM __InstanceCreationEvent WHERE TargetInstance ISA 'Win32_NTLogEvent' AND TargetInstance.LogFile='System' AND TargetInstance.SourceName='iScsiPrt' AND TargetInstance.EventCode=34"
$WMIEventFilterUp = Set-WmiInstance -Class __EventFilter -Namespace "root\Subscription" -Arguments @{Name=$FilterNameUp;EventNameSpace="root\cimv2";QueryLanguage="WQL";Query=$QueryUp}
Write-Host "Created new UP event Filter. " $WMIEventFilterUp.Path

#$CommandLineTemplateUp = 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy Bypass -File c:\bin\iSCSI_OnIscsiChange.ps1 -state Up -eventMessage %TargetInstance.Message%'
$CommandLineTemplateUp = "powershell.exe -command `". 'c:\bin\iSCSI_OnIscsiChange.ps1' 'Up' '%TargetInstance.Message%'`""
$ExecutablePath = "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"
$WMIEventConsumerUp = Set-WmiInstance -Class CommandLineEventConsumer -Namespace "root\subscription" `
    -Arguments @{Name=$ConsumerNameUp;CommandLineTemplate=$CommandLineTemplateUp;ExecutablePath=$ExecutablePath }
Write-Host "Created new UP event consumer."

$result = Set-WmiInstance -Class __FilterToConsumerBinding -Namespace "root\subscription" -Arguments @{Filter=$WMIEventFilterUp;Consumer=$WMIEventConsumerUp}
Write-Host "Created new UP event binding."

Write-Host Done.

Pay attention to the namespaces used: you mostly want “root\Subscription”, but the event filter’s EventNamespace must be “root\cimv2”.

Then have this file sitting on disk being referenced by the WMI EventConsumer:

c:\bin\iSCSI_OnIscsiChange.ps1

$state = $args[0]
$eventMessage = $args[1]

Function Write-Log {
    [CmdletBinding()]
    Param(
    [Parameter(Mandatory=$False)] [ValidateSet("INFO","WARN","ERROR","FATAL","DEBUG")] [String] $Level = "INFO",
    [Parameter(Mandatory=$True)] [string] $Message,
    [Parameter(Mandatory=$False)] [string] $logfile
    )

    $Stamp = (Get-Date).toString("yyyy/MM/dd HH:mm:ss")
    $Line = "$Stamp $Level $Message"
    If($Logfile) {
        Write-Output $Line
        Add-Content $logfile -Value $Line
    }
    Else {
        Write-Output $Line
    }
}

$Logfile = "C:\bin\iSCSI_EventLog.log"
Write-Log DEBUG "New iSCSI state: $state" $Logfile
Write-Log DEBUG "Event message: $eventMessage" $Logfile

if ($state -eq "Down") 
{
    Write-Log INFO "iSCSI reported 'connection to target was lost', turning virtual machines off!" $Logfile
    Get-VM | where {$_.State -eq 'Running'} | Tee-Object -Append -FilePath $Logfile | Stop-VM -TurnOff -Force | Tee-Object -Append -FilePath $Logfile
} 
else 
{
    Write-Log INFO "iSCSI reported Initiator reconnected to target, turning virtual machines back on" $Logfile
    Get-VM | Tee-Object -Append -FilePath $Logfile | Start-VM | Tee-Object -Append -FilePath $Logfile
}

I couldn’t get named parameters working with my CommandLineEventConsumer, only positional ones, hence the “$args” usage above.

Setup

  1. In PowerShell, run:
    1. PS C:\> .\bin\iSCSI_Monitor.ps1

Testing Options

Consider using “-WhatIf” parameters for Start-VM and Stop-VM in iSCSI_OnIscsiChange.ps1 unless you want to actually Stop/Start your VMs.

Confirm actions in log file “C:\bin\iSCSI_EventLog.log”.

Option A: Create fake events with “eventcreate”

  1. CMD> eventcreate /T INFORMATION /L SYSTEM /ID 20 /SO Test /D Test
  2. Note:
    1. You will need to remove the “AND TargetInstance.SourceName=’iScsiPrt’” filter conditions as the events will show up as Source “Test”

Option B: Disable/Enable the Targets from within Synology’s “iSCSI Manager” app.

This is faster than actually rebooting your Synology NAS, if you’re okay with taking your iSCSI Targets offline, which also means actually turning off your guest VMs.

  1. Launch “iSCSI Manager” in your Synology Dashboard
  2. 20180814 - Synology iSCSI Manager
  3. Select your Target you wish to test taking offline, click “Disable”, then click “Yes” to acknowledge the confirmation dialog.
  4. Once you’ve confirmed in the log file that the event was received, click “Enable” to turn the Target back on. State should eventually go from “online” to “connected” once your Hyper-V host connects again.

Windows Server 2016 Hyper-V “password is not correct” to Synology share

July 17, 2017

I could not get Windows Server 2016 Hyper-V (10.0.14393) in a WORKGROUP to authenticate to a Synology SMB share when the Synology unit (DSM 6.1.3-15152) was set to a maximum SMB version of 3. I kept getting the error “System error 86 has occurred. The specified network password is not correct.” (86 == ERROR_INVALID_PASSWORD) on the Hyper-V system. I retried typing the username and password many times, all with the same error, even though I knew the credentials were fine because I could use the same share from a Windows 7 and a Windows 10 machine. I never saw any errors about this in Synology’s log or in Windows Event Viewer.

I had to go back to SMB 2.0 on the Synology: in DSM, Control Panel > File Services > SMB > Advanced Settings > “Maximum SMB” set to SMB2. After that, Windows Server 2016 Hyper-V Core would connect to the Synology share.


“No valid certificates were found on this smart card”

June 30, 2015

At work we use smart cards for TFA, largely for accessing company resources remotely. I’m currently using a Gemalto .NET smart card with an OMNIKEY Cardman 6121, a SIM-sized smart card plugged into a USB dongle, which is more convenient than the older full-sized card and wired Omnikey 3021 used previously.

For years this setup has been fine: connect to work from home, renew certificates when they expire, and delete expired certs when the card runs out of space.

However a few months ago I started seeing the following error instead of getting prompted for my PIN:


No valid certificates were found on this smart card. Please try another smart card or contact your administrator

The same smart card still worked on my laptop and on other PCs, so it wasn’t a matter of expired certs. Complicating matters, my home PC’s TPM, on which I had stored virtual smart cards, had those same certs expire at roughly the same time, and the error “No valid certificates” can be interpreted as (I feel) “we found certs, but none of them are valid”, so I spent some extra cycles making sure all my certificates were updated and valid instead of finding the real problem. (Between then and now I also updated the same system to Windows 10, which explains the difference in screenshots.)

After putting the problem aside for a while, I eventually noticed my working laptop was using Gemalto mini-driver version 8.4.5.0 while the system that didn’t work was using version 8.4.8.0. I installed the 8.4.5.0 version from the Windows Driver Catalog, but whenever I selected that driver for the card it would update back to 8.4.8.0, and because I was busy with other things at the time I didn’t really press on it.


 .NET Gemalto search on the Windows Driver Catalog (to install, download a cab locally, unpack it, and right-click the .inf and select “Install”)

However, when I went to enumerate the certificates on the card via `certutil -scinfo > scinfo.txt`, instead of getting a PIN prompt and the certificate list I got this error:


The smart card cannot perform the requested operation or the operation requires a different smart card

and the scinfo.txt output file indicated failures reading the key container, where the same operation succeeded on my laptop:

--------------===========================--------------
================ Certificate 0 ================
--- Reader: OMNIKEY CardMan 6121 0
--- Card: Axalto Cryptoflex .NET
Provider = Microsoft Base Smart Card Crypto Provider
Key Container = (null) [Default Container]
Cannot open the AT_SIGNATURE key for reader: OMNIKEY CardMan 6121 0 …

Knowing the key container is read and written by the mini-driver, the failure to read it still hinted at some incompatibility with the particular Gemalto mini-driver version I had installed.

So I went back to Device Manager (devmgmt.msc) and selected the Gemalto IDPrime .NET Smart Card node under “Smart cards”. From there I selected “Update Driver…” > “Browse my computer for driver software” > “Let me pick from a list of device drivers on my computer” and chose the previously installed 8.4.5.0 driver version:


Now when authenticating I can finally see the smart card LED blinking, indicating activity, and eventually the familiar PIN prompt:


Hooray, I can work from home again.


Contents of .pdf files not being indexed in Windows 8/8.1

August 10, 2014

Summary

I broke Windows Search indexing because I didn’t give the SYSTEM user full permission on the folders I wanted indexed.

Background

We’ve been using a Fujitsu ScanSnap S1300i to scan all incoming paperwork (e.g. receipts, bills) since October 2012. The ScanSnap includes ABBYY FineReader OCR functionality (not ABBYY FineReader itself; instead ScanSnap links to ABBYY .dll’s to do the actual OCR). When the “Searchable PDF” option is used, ABBYY OCRs the .pdf and embeds the searchable text.

Copy and paste of selected text in a PDF

A collection of searchable-PDFs is only useful if something indexes them and you can search that index.  For most Windows users, the built-in Windows Search feature more than handles the task.

A few months ago I had profiled Windows boot performance to find out why initial logon was slow, only to discover that Windows Search itself appeared to be aggressively reading the disk, so I culled the list of indexed folders to lessen the load.  At the same time I rearranged folders to optimize disk usage and to simplify backups.  But around that time I noticed that searching on keywords that previously returned .pdf results now instead returned “No items match your search”.

[Screenshot: search returning “No items match your search”]

If at this point I had tried searching for known PDFs in other locations (outside of my D:\Scans directory) I might have found out they were being returned.  However since the vast majority of all my PDFs are within D:\Scans I didn’t even bother checking.  Since other document types turned up in search results I assumed it was just a PDF-indexing problem.

Troubleshooting steps I tried (which might help you)

1) Double-checked I hadn’t removed my scan folder from Indexing Options. I also tried removing and re-adding that folder.  I clicked the “delete and rebuild index” button between some changes thinking it’d make a difference.

It didn't; it just made the whole process take longer.

2) Ran the Windows Search troubleshooter: Control Panel > search “windows search” > “Troubleshooting: Find and fix problems with Windows Search”.  I checked the “Files don’t appear in search results” checkbox, though I now suspect this is just a CEIP checkbox.  I always got “Issue not present” on each of the issues checked, including “Incorrect permissions on Windows Search directories”, ha!

3) Checked and changed the HKEY_CLASSES_ROOT\.pdf\PersistentHandler registry value per these steps from Adobe: http://helpx.adobe.com/acrobat/kb/pdf-search-breaks-110-install.html .  I spent a while on this step (and the next few) because I had previously installed the guilty version of Adobe Acrobat Reader, and I had even installed the Adobe PDF iFilter (v11.0.01) before learning that Windows 8 includes PDF indexing out of the box.  (It’s now uninstalled because the built-in Windows PDF indexing is just fine.)

4) Reset Windows Search settings.  Setting the REG_DWORD value SetupCompletedSuccessfully to 0 at HKLM\SOFTWARE\Microsoft\Windows Search\ reset all “Index these locations” folders in Windows Search.  Re-adding the scan directory still didn’t help get those PDFs indexed.  While I was at it I configured Windows Search to index more aggressively since I was spending time waiting for index rebuilds.
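For reference, step 4 can be scripted from an elevated command prompt; this is a minimal Windows-only sketch, assuming the standard Windows Search service name WSearch:

```shell
:: Reset Windows Search state (elevated command prompt, Windows-only).
net stop WSearch
reg add "HKLM\SOFTWARE\Microsoft\Windows Search" /v SetupCompletedSuccessfully /t REG_DWORD /d 0 /f
net start WSearch
```

With the value set to 0, Windows Search should re-run its setup on service start, which resets the indexed-locations list.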

5) Checked CLSIDs and .dll registration for .pdf indexing.  (Also worth doing if “Filter Description” for .pdf isn’t “Reader Search Handler” and you don’t have the Adobe PDF iFilter installed.)

[Screenshot: Indexing Options file type list showing “Reader Search Handler” for .pdf]

To do this:

a) Stop the Windows Search service.  Open services.msc, find “Windows Search” and right-click it to stop.

b) Default value at HKEY_CLASSES_ROOT\.pdf\PersistentHandler should be {1AA9BF05-9A97-48c1-BA28-D9DCE795E93C}

c) Default value at HKEY_CLASSES_ROOT\CLSID\{1AA9BF05-9A97-48c1-BA28-D9DCE795E93C}\PersistentAddinsRegistered\{89BCB740-6119-101A-BCB7-00DD010655AF} should be {6C337B26-3E38-4F98-813B-FBA18BAB64F5}

d) If you’re running Windows 8.x:

  • Default value at HKEY_CLASSES_ROOT\CLSID\{6C337B26-3E38-4F98-813B-FBA18BAB64F5}\InProcServer32 should be %systemroot%\system32\glcndFilter.dll
  • In an administrative command prompt, run: regsvr32 %systemroot%\system32\glcndFilter.dll  and confirm you get “DllRegisterServer in C:\WINDOWS\system32\glcndFilter.dll succeeded.”

e) If you’re running Windows 10:

  • Default value at HKEY_CLASSES_ROOT\CLSID\{6C337B26-3E38-4F98-813B-FBA18BAB64F5}\InProcServer32 should be %systemroot%\system32\Windows.Data.Pdf.dll

f) Restart the Windows Search service

g) If you made any changes to the registry values, rebuild your search index
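The registry values in steps (b) through (e) can be spot-checked from a command prompt with reg query; a Windows-only sketch (/ve prints the default value of each key):

```shell
:: Print the default values along the .pdf PersistentHandler chain (Windows-only).
reg query "HKCR\.pdf\PersistentHandler" /ve
reg query "HKCR\CLSID\{1AA9BF05-9A97-48c1-BA28-D9DCE795E93C}\PersistentAddinsRegistered\{89BCB740-6119-101A-BCB7-00DD010655AF}" /ve
reg query "HKCR\CLSID\{6C337B26-3E38-4F98-813B-FBA18BAB64F5}\InProcServer32" /ve
```

If any of the three default values differs from those listed above, fix it and rebuild the index per step (g).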

 

6) Checked the contents of the Windows Search ESE database (windows.edb) to verify whether this was an issue with the indexer not seeing (or erroring out on) the files in question, or an issue with storing the indexed values into the database.  Windows.edb is a standard ESE/JET Blue database.

I also reset Windows Search again (see step #4 above) and configured it to index only a few small directories, including a sub-folder of my much larger D:\Scans directory, just to keep indexed values to a minimum.

Then you:

  • Stop the Windows Search service (via services.msc)
  • Copy file Windows.edb found at C:\ProgramData\Microsoft\Search\Data\Applications\Windows to another location.
  • Download and run ESE Database View (note: this isn’t my file and I cannot 100% attest to its safety but at least you don’t need to elevate when running it).  Open the previously copied Windows.edb file.
  • From the drop-down, choose “SystemIndex_PropertyStore” and do a CTRL-F search for files which should be indexed.  If they show up, the file has been indexed; if not, it hasn’t.
  • Note: if you get “0 record(s)” after selecting the “SystemIndex_PropertyStore” table, it’s possible your windows.edb file or that table is too large.  My smaller windows.edb file was 232 MB, but now that Windows is indexing a much larger set of files it’s 2.4 GB.  It’s possible ESEDatabaseView cannot open ESE databases over a certain size.  Still a handy utility to know about.

[Screenshot: ESE Database View showing the SystemIndex_PropertyStore table]

7) Lastly, since I didn’t see the PDF files I wanted indexed in the windows.edb database, I compared this workstation to a known-working one where PDF indexing worked.  I compared all the above HKLM/HKCR values between the two systems: no difference.  I then compared the file security permissions between two files: one on the working system which turned up in search results and one on my busted system which didn’t.  At the same time I checked the account the SearchIndexer.exe process runs under:

[Screenshot: SearchIndexer.exe process account]

There’s the problem: SearchIndexer.exe runs as SYSTEM, and I didn’t add SYSTEM to the security permissions on D:\Scans.

[Screenshot: D:\Scans Security tab missing the SYSTEM user]

I quickly granted the SYSTEM user permissions on all directories I wanted indexed and then rebuilt the search database.  Very quickly thereafter, the PDFs started showing up in search results.

Add SYSTEM user either via the “Edit…” dialog in the folder Properties > Security tab, or run something like the following in an elevated command prompt:

icacls "D:\Scans" /grant SYSTEM:(OI)(CI)F
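In that grant, (OI)(CI)F means object-inherit, container-inherit, full control, so new files and sub-folders under D:\Scans inherit the permission too.  As a sanity check you can list the resulting ACL (Windows-only):

```shell
:: List the ACL on the scan folder; SYSTEM should now appear with (OI)(CI)(F).
icacls "D:\Scans"
```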

Additionally, in my forum-crawling for solutions, I saw that others had success with copying files into the directory again to get indexing to work.  I suspect this works for them because Windows doesn’t preserve the source file’s ACL on copy but instead rebuilds it from the destination folder’s ACL, which might be accessible to the SearchIndexer.exe process.

Categories: Troubleshooting, Windows 8

Enable touch on HP 2740p with Windows 8.1

January 25, 2014 5 comments

After upgrading my HP 2740p from Windows 8 to Windows 8.1, touch input stopped working.  The Wacom pen input still worked.  I re-installed PenTablet_533-3.exe and Wacom Digitizer Driver 3.0.7.24 – sp52863.exe (in that order) but touch wasn’t restored.

It wasn’t until I opened Windows’ Control Panel, searched for Touch, launched the Touch Settings app, and ran through the 16-point calibration that touch was restored.

[Screenshot: Touch Settings v3.0.7-24]

Touch Settings (WTouchCPL.exe) on my system is installed to C:\Program Files\WTouch

Categories: Uncategorized Tags: ,

Quicken 2011 Not Opening Last Quicken Data File (QDF) Used

August 11, 2012 Leave a comment

I recently installed Windows 8 RTM onto my laptop.  Quicken 2011 R8 successfully installed and I was able to open my most recent QDF file without any issues.  However, the next time I opened Quicken, and every time after that, it presented me with the “Select your existing data file to get started” default screen rather than re-opening the last open file:

[Screenshot: Quicken “Select your existing data file to get started” screen]

I re-opened Quicken again, this time with ProcMon running.  I noticed that it was successfully reading my directory of Quicken save files, but in the end it was failing to open my QDF file because it was using the wrong file name.  It had truncated it to a shorter 8.3 filename, and since no file exists by that name, the open failed:

I don't actually keep my Quicken files at C:\Quicken, but changed it for this image to make it simpler. =)

This was a regression in behavior from running Quicken 2011 R8 in Windows 7 SP1 where I was using the same file at the same directory without any issues.

To work around this I renamed my original QDF file from “RyanR Quicken Data.QDF” to “RyanR.QDF” so the filename was 5 characters long (less than or equal to 8).  I then re-opened Quicken, chose the renamed file, and then exited Quicken.  The next time I launched Quicken it successfully opened the last open QDF file.
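If you want to see the 8.3 short name Windows generated for a file (presumably the name Quicken was falling back to), dir /x prints short names alongside the long names; the C:\Quicken path here is the illustrative one from the ProcMon note above (Windows-only):

```shell
:: Show 8.3 short names next to long file names (Windows-only).
dir /x "C:\Quicken"
```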

From this Intuit forum thread it seems that this is a known issue with Quicken.  I just don’t know why I never saw this problem until now.

Categories: Windows 8 Tags: ,