Windows 10 Wi-Fi – No Internet

SOS!! 22 hours with no wifi!!!!

In the past 48 hours, two different family members in different households have reported problems with their Windows 10 laptops’ Wi-Fi connections. Some basic troubleshooting — restarting the modem/router, verifying other devices could connect — demonstrated that the issue was with the laptops.

The laptop was connected to the Wi-Fi access point with full signal strength, but there was no connectivity beyond the access point itself.

In the first troubleshooting effort, we did the standard things:

  1. Reboot. Of course.
  2. Disable and re-enable the Wi-Fi adapter
  3. Check the adapter settings
  4. Run the Network Troubleshooter

The Network Troubleshooter didn’t resolve anything, but it did mention something useful. It reported that the “Wi-Fi” adapter had an invalid configuration.

At this point, I turned to Google, and found a couple of sites suggesting using netsh to reset the IP configuration. We ran the following commands from an elevated command prompt (run as administrator, or it won’t work):

  1. netsh interface IPv4 reset
  2. ipconfig /flushdns

Then we rebooted, and the system came up and connected to Wi-Fi and the Internet was available again.

Subsequently, I found this Microsoft support article entitled Fix network connection issues in Windows 10, which covers many of the steps we tried as well as the steps that resolved our issues.

In Windows 10, if you run Netsh interactively, you see a notification that Netsh is deprecated and that you should transition to the admittedly awesome PowerShell modules for managing TCP/IP. However, given the specific behavior of the netsh interface ipv4 reset command (it overwrites registry information; see the More Information section of https://support.microsoft.com/en-us/kb/299357), I’m not sure what PowerShell command would accomplish the same end. Something to look into.
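In the meantime, the NetTCPIP module cmdlets are at least handy for inspecting the state that the netsh reset rewrites. A minimal sketch (the interface alias 'Wi-Fi' is an assumption; adjust for your adapter, and as far as I know none of these duplicates the registry-overwrite behavior of the reset):

# Summary of IP configuration, roughly "netsh interface ipv4 show config"
Get-NetIPConfiguration

# Per-interface IPv4 settings (DHCP state, metric, etc.)
Get-NetIPInterface -AddressFamily IPv4

# Remove addresses from one adapter -- destructive, so preview with -WhatIf first
Get-NetIPAddress -InterfaceAlias 'Wi-Fi' | Remove-NetIPAddress -WhatIf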

Outlook MessageHeaderAnalyzer and Unsubscribe

Microsoft and other providers have published add-ins that provide additional functionality within Outlook and Outlook for web. We have enabled two add-ins which you may find useful: the Message Header Analyzer and the Unsubscribe add-in.

To make them available in your Outlook (Win/Mac/Web), you need to log into mail.uvm.edu and go to the Manage add-ins option on the Options (gear) menu:

Image of the options menu in Outlook for web, with the "Manage add-ins" item highlighted.

Click the check-box in the Turned on column to make one or both add-ins available in Outlook:

Once this step is complete, the add-ins you have turned on should appear in the message window in your Outlook mail clients for Windows, Mac, and the web. It may take a little while (or maybe a restart of Outlook) before they appear in the Windows and Mac versions.

Outlook add-ins as they appear in Outlook for the Web.

Outlook add-ins as they appear in Outlook for Windows.

The Message Header Analyzer provides a convenient way to view detailed information (metadata) about an email message, including the message routing information.

The Message Header Analyzer in Outlook for Windows.

The Unsubscribe add-in appears when viewing bulk marketing messages, and depending on the content of the message, may unsubscribe your address from a marketing list or may suggest simply blocking mail from that sender.

The Unsubscribe add-in within Outlook for Windows, suggesting that we block mail from this sender.

We hope that you will find these add-ins useful. Please let us know what you think.

Scheduled tasks, PowerShell’s -file parameter, and array values

I wrote a script that accepts a comma-separated list of values, and the script worked just fine from the command-line. However, when I tried to configure a scheduled task to run the script, it always failed.

Why? Well, I started a cmd.exe session and then launched the script the same way the scheduled task did, using PowerShell’s -file parameter. When I did that, the error message emitted by the script showed me that the list was being parsed as a single string argument.

To confirm and experiment, I wrote a short test script:

<# Cast-WizardSpell.ps1
.SYNOPSIS
Simple script to test parameter parsing when using -file invocation, e.g.:
powershell.exe -file .\Cast-WizardSpell.ps1 -SpellList 'Light','Magic Missile'
#>
[CmdletBinding()]
param(
    # One or more spell names, from the command line or the pipeline
    [Parameter(Mandatory=$True,ValueFromPipeline=$True)]
    [string[]]
    $SpellList
)
process {

    foreach ($spell in $SpellList) {
        "Casting $spell"
    }
}

When run from within a PowerShell session, it works as expected:


PS C:\> .\Cast-WizardSpell.ps1 -SpellList 'Ray of Frost','Light','Detect Magic'
Casting Ray of Frost
Casting Light
Casting Detect Magic

When invoked using the PowerShell -file parameter, the comma-separated list is parsed as a single parameter (note: cmd.exe doesn’t like single quotes):


C:\>powershell -file .\Cast-WizardSpell.ps1 -SpellList "Ray of Frost","Light","Detect Magic"
Casting Ray of Frost,Light,Detect Magic

# Trying explicit array syntax, but no luck

C:\>powershell -file .\Cast-WizardSpell.ps1 -SpellList @("Ray of Frost","Light","Detect Magic")
Casting @(Ray of Frost,Light,Detect Magic)

What does work is to use the old-style -command syntax:


C:\>powershell -command "& .\Cast-WizardSpell.ps1 -SpellList 'Ray of Frost','Light','Detect Magic'"
Casting Ray of Frost
Casting Light
Casting Detect Magic

Alternatively, one can adjust the parameter declaration, adding the ValueFromRemainingArguments attribute. However, for this to work, you can’t specify the parameter name.
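Here’s what that adjusted declaration might look like (a sketch; only the attribute on the Parameter line changes from the script above):

param(
    # Gathers all remaining unnamed arguments into the array
    [Parameter(Mandatory=$True,ValueFromRemainingArguments=$True)]
    [string[]]
    $SpellList
)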


C:\>powershell -file .\Cast-WizardSpell.ps1  "Ray of Frost" "Light" "Detect Magic"
Casting Ray of Frost
Casting Light
Casting Detect Magic

C:\local\scripts>powershell -file .\Cast-WizardSpell.ps1 -SpellList "Ray of Frost" "Light" "Detect Magic"
C:\local\scripts\Cast-WizardSpell.ps1 : A positional parameter cannot be found that accepts argument 'Light'.
+ CategoryInfo          : InvalidArgument: (:) [Cast-WizardSpell.ps1], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : PositionalParameterNotFound,Cast-WizardSpell.ps1

I’m not thrilled with either of these options, because someone like me may come along and, in an effort to be helpful, twiddle the command line, thinking they’re normalizing or updating the syntax, when they’re really breaking things. However, I think the -Command invocation is the least surprising, most consistent implementation. I’ll just make notes in the script help and in the description of the scheduled task explaining why I’ve used that method.


One-liner: duplicate a folder collection, without files

New fiscal year; new set of empty folders, but with the same structure and permissions as the previous year? Robocopy to the rescue:

robocopy "FY 2015" "FY 2016" /e /xf * /COPY:DATSO /log:c:\temp\new-year-folders.log /tee

  • /e = copy all subdirectories, even empty ones.
  • /xf * = exclude files matching *, i.e., all of them.
  • /COPY:DATSO = what to copy: Data, Attributes, Timestamps, Security, and Owner.

I like to log things, so I include that, too. If you’re really cautious, you could do a dry run with the /L switch, which makes robocopy just log what it would do, but not actually perform any actions. Kind of like the PowerShell -whatif switch.
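For example, here’s the same command as a dry run (only the /L switch and a different log name added):

robocopy "FY 2015" "FY 2016" /e /xf * /COPY:DATSO /L /log:c:\temp\new-year-folders-dryrun.log /tee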

Robocopy file classes

This information comes from the Robocopy.exe documentation PDF for the Windows XP version, but it’s the best description I’ve been able to find. From page 15 of that document:

Using Robocopy File Classes

For each directory processed, Robocopy constructs a list of files in both the source
and destination directories. This list matches the files specified on the command line
for copying.

Robocopy then cross-references the lists, determining where files exist and comparing
file times and sizes. The program places each selected file in one of the following
classes.

File Class   In source    In destination   Source/dest file times   Source/dest file sizes   Source/dest attributes
Lonely       Yes          No               n/a                      n/a                      n/a
Tweaked      Yes          Yes              Equal                    Equal                    Different
Same         Yes          Yes              Equal                    Equal                    Equal
Changed      Yes          Yes              Equal                    Different                n/a
Newer        Yes          Yes              Source > Destination     n/a                      n/a
Older        Yes          Yes              Source < Destination     n/a                      n/a
Extra        No           Yes              n/a                      n/a                      n/a
Mismatched   Yes (file)   Yes (directory)  n/a                      n/a                      n/a

By default, Changed, Newer, and Older files are candidates for copying (subject to
further filtering, as described later). Same files are not copied. Extra and Mismatched
files and directories are only reported in the output log.

Normally, Tweaked files are neither identified nor copied – they are usually identified
as Same files by default. Only when /IT is used will the distinction between Same and
Tweaked files be made, and only then will Tweaked files be copied.
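So a copy pass that also detects and copies Tweaked files might look like this (a sketch; the paths are placeholders):

robocopy C:\source D:\dest /e /COPY:DATSO /IT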

Readable System Event logs

I think I’m not alone in finding that the Service Control Manager logs so many informational events that it’s hard to pick out the important events in the System Event log on modern Windows systems.

I’ve used custom XPath queries of Event logs before, and decided to define a Custom view of the System event log that suppresses the events generated by the Service Control Manager that are in the Informational or Verbose categories. Here’s the XML that defines this custom view:


<QueryList>
  <Query Id="0" Path="System">
    <Select Path="System">*</Select>
    <Suppress Path="System">*[System[Provider[@Name='Service Control Manager']
      and (Level=4 or Level=0 or Level=5)]]</Suppress>
  </Query>
</QueryList>
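The same filter can also be run from PowerShell, if you want to test it without opening Event Viewer. A quick sketch (assumes the XML above has been saved as scm-quiet.xml; that file name is mine):

# Load the query XML and retrieve the 50 most recent matching events
$query = [xml](Get-Content -Raw .\scm-quiet.xml)
Get-WinEvent -FilterXml $query -MaxEvents 50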


Renaming directories with invalid names

Somehow, a client managed to create several directories with names that ended with a period. However, File Explorer and other tools (e.g., backup software) were unable to access the folder contents, getting an error usually rendered as “The system cannot find the file specified.”

According to KB2829981, the Win32 API is supposed to remove trailing space and period characters. KB320081 has some helpful suggestions, and also indicates that some techniques allow programs to bypass the filename validation checks, and that some POSIX tools are not subject to these checks.

I found that I was able to delete these problem folders by using rmdir /q /s "\\?\J:\path\to\bad\folder." But I wanted to rename the folders in order to preserve their contents. After flailing about for a while, including attempts to modify the folders using a macOS client and a third-party SSH service on the host, I was prodded by my colleague Greg to look at Robocopy.

In the end, my solution was this:

  1. I enabled 8dot3 file name creation on a separate recovery volume (I didn’t want to do so on the multi-terabyte source volume).
  2. Using robocopy, I duplicated the parent folder containing the invalid folder names to the recovery volume, resulting in the creation of 8dot3 names for all the folders.
  3. I listed the 8dot3 names of the problem folders with dir /x.
  4. I used the rename command with the short name as the source and a valid new name as the target (see the sketch below).
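In command form, the process looked roughly like this (a sketch; the recovery drive letter R: and the paths are hypothetical):

rem 1. Enable 8dot3 name creation on the recovery volume (0 = enable)
fsutil 8dot3name set R: 0
rem 2. Duplicate the parent folder; the copies get 8dot3 short names as they are created
robocopy "J:\shares\parent" "R:\recovery\parent" /e /COPY:DATSO
rem 3. List the generated short names of the problem folders
dir /x "R:\recovery\parent"
rem 4. Rename using the short name as the source
ren "R:\recovery\parent\BADFOL~1" "renamed-folder"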

This fixed the folders, and let me access their contents. I then deleted the invalid folders from the source and copied the renamed folders into place.

It seems like a simple process, but I managed to waste most of a morning figuring this out. Hopefully, this may save someone else some time.

Troubleshooting Offline Files

My previous post describes the normal operation of Offline Files. And most of the time, “it just works.” But there are times when it won’t, and getting it running again can be challenging.

Two important concepts

First, it’s important to understand that the Offline Files facility provides a virtual view of the network folder to which Documents has been redirected whenever Windows detects that the network folder is unavailable. This means that, when Offline Files is really borked, users can see different things in their Documents folder depending on whether their computers are online or offline.

Second, Windows treats different names for the same actual server as if they are different servers altogether. Specifically, Windows will only provide the Offline Files virtual view for the path to the target network folder. You can see the target folder path in the Properties of the Documents folder.

The Location tab shows the UNC path to the target network folder.

For example, these two UNC paths resolve to the same network folder:

\\files.uvm.edu\rallycat\MyDocs
\\winfiles1.campus.ad.uvm.edu\rallycat\MyDocs

If the second path is the one that is shown in the Location tab in the properties of the Documents folder, then you will be able to access that path while offline, but not the first path.
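If you’d rather check from a shell than from the folder properties dialog, this PowerShell one-liner reports the current Documents target, redirected or not (a sketch using the standard .NET folder-path API):

# Returns the actual Documents path, e.g. a UNC path when redirected
[Environment]::GetFolderPath('MyDocuments')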

Show me the logs

There are event logs that can be examined. I’ll mention them, but I’ve rarely found them helpful in solving a persistent problem. If you want to get the client up and running again ASAP, skip ahead to the Fix it section.

Two logs are normally visible in the Windows Event Viewer, under the Applications and Services Logs heading:

  • Microsoft-Windows-Folder Redirection/Operational
  • Microsoft-Windows-OfflineFiles/Operational
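You can also pull recent entries from either log with PowerShell; a quick sketch:

# The 20 most recent Offline Files events; substitute the Folder Redirection log name as needed
Get-WinEvent -LogName 'Microsoft-Windows-OfflineFiles/Operational' -MaxEvents 20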


Folder Redirection and Offline Files

The following information is not new. We are in the process of making changes to our Folder Redirection policy, though, and I thought it might be helpful to have this baseline information in a handy place for reference.

Background

Offline Files is a feature of Windows that was introduced in parallel with Folder Redirection in Windows 2000. Folder Redirection allows an administrator to relocate some of the user profile data folders to a network folder, which has the advantage of protecting that data from loss due to workstation issues like drive failure, malware infection, or theft. It also means you can access your data from multiple workstations.

The Offline Files facility provides a local cache of the redirected folder(s) so that mobile users can continue to work with the data in those folders when disconnected from the organization’s network. When the computer is connected to the network again, any changes to either the network folder or the local Offline Files cache are synchronized. Users are prompted to resolve any conflicting changes, e.g., the same file was modified in both places, or was deleted from one and modified in the other.
