External M.2 NVME SSD Enclosures and Heat and Failure and Heartache

Bought an nvme ssd external enclosure so I could bulk copy my data from my old laptop to my new one.

This is the one: https://amzn.to/3rgNAVq

Works great, but I learned the hard way that these will run warm, nay HOT, when you’re running them hard. And what is running a drive harder than copying nearly a TeeBee of data off of it in one chonk?

I didn’t realize what was happening on the first attempt- things just seemed to lock up after robocopy had moved a whole bunch of data across. The next day I set up another batch copy and eventually it happened again: the copy seemed to slow down, then I started seeing file access errors (“file is missing”, etc.), and eventually it just stopped. This was the point I noticed I could barely touch the aluminum case of the ssd enclosure.

After this second failed attempt, the drive came back online with ERRORS. Ugh. I pulled the cover off the gizmo and set up a fan to run across it, then corrected the errors and did yet another bulk copy operation. Except this time: Success.

I read more about this afterward, and apparently: (a) these drives do run hot; (b) PCs are sorta expected to have enough airflow to keep them ventilated; but (c) the inside of a PC case can also be too stifling for your ssd, so you need to double check that the internal fans are moving at least some air over it. Or else you might cook that little sucker.

Seems like the little external enclosure products may want to rethink their designs and include at minimum some ventilation, but perhaps even consider a small fan.


#:~:text= is appearing in urls to your site, and you see it in other urls as well.
You didn’t put it there, why is it showing up?

Google added this feature to the Chrome browser so that the text on the page matching whatever follows the #:~:text= token gets highlighted on the page (and I believe scrolled to as well- need to double check this).

Then, google.com started adding this to links from serps that use snippets of text from your webpage. Thus, google shows the snippet of text from your page on their results page, and if a user clicks the link, they are taken directly to the spot on your page with that bit of text from the snippet, highlighted in yellow.

(click this link and the text above will appear actually highlighted, using this #:~:text= portion appended to this page url).

On a related note, I just noticed that the page must be reloaded for the highlighter action to work- so if the link above began with just the # (the anchor url indicator), it would not activate the highlighting on the page- but if you link to the actual url for the page and append this, it will highlight AFTER the page has fully loaded/reloaded.
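Under the hood the fragment is just percent-encoded text appended after the #:~:text= token- here’s a quick sketch in Python (the page url and snippet are made up for illustration):

```python
from urllib.parse import quote

def text_fragment_url(page_url, snippet):
    # Build a scroll-to-text-fragment link; the browser decodes the
    # part after #:~:text= and highlights the first match on the page.
    return page_url + "#:~:text=" + quote(snippet)

url = text_fragment_url("https://example.com/post", "barely touch the aluminum case")
print(url)  # https://example.com/post#:~:text=barely%20touch%20the%20aluminum%20case
```

Paste a url built like this into Chrome (or Edge) and the snippet text gets highlighted after the page loads.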

I just noticed bing.com has added support for this as well- so I’m guessing the edge browser likely added this feature too.

I assume this little slug can be used for other things, like tracking in analytics to see that visitors are coming to your site from clicking a snippet.

Connect WordPress with Gmail without smtp

The biggest pain in setting up a wordpress site is getting it connected to an email account to use as the sending account for system messages. This usually requires digging up obscure smtp settings from your email provider, adding credentials that will need updating again if you ever change your email, and various spam-triggering gotchas that make it all more painful than it should be.

I used Zapier recently to automate some notifications that are sent out by email. Zapier has a Gmail connector that is as simple as logging into your gmail account inside their app, and then zapier has the access it needs. No fiddling with a bunch of settings and hoping it works- it just seems to work, and pretty well.

I am looking for the equivalent of this simplicity for use with wordpress- a plugin that allows a simple connection to gmail using your gmail or google apps domain account login info, and then it just works.

This is a work in progress- I will add any solutions I find here. Please add your suggestions in the comments.

CS1929 C# ‘ILoggerFactory’ does not contain a definition for and the best extension method overload requires a receiver of type

This error pops up pretty frequently when upgrading your .net core project to 3.0 or 3.1.

The solution apparently is to replace the old code:

private ILoggerFactory ConfigureLogging(ILoggerFactory factory)
{
      return factory;
}

With this new version:

private IServiceCollection ConfigureLogging(IServiceCollection factory)
{
      factory.AddLogging(opt => opt.AddConsole()); // configure providers here- AddConsole is just an example
      return factory;
}

ASP.Net MVC and MVC Core Error 500 after editing cshtml Razor page

I’ve had this pop up a few times over the years when editing Razor syntax cshtml files. There is some perfectly legal c# code that even compiles fine in your razor files, but will fail when you try to run it- specifically, when you do this in c#:

if(condition == true)
something = somethingelse;

Perfectly legal to do a single line of code following a conditional statement. However, when this is run inside a code block in a razor/cshtml file, it will fail! So you always have to wrap the statement in braces, like so:
if(condition == true)
{
something = somethingelse;
}

I feel like this was overlooked in razor version 0.1 or something and was never corrected later. It would be nice if it at least failed during compile time.

System.Threading.Tasks.Task`1[Microsoft.AspNet.Mvc.Rendering.HtmlString] in place of Html.PartialAsync

System.Threading.Tasks.Task`1[Microsoft.AspNet.Mvc.Rendering.HtmlString] showed in my cshtml page where partial views were supposed to render. This was after upgrading to asp.net core 3.1 and going through the warnings that said Html.Partial should be replaced with Html.PartialAsync now to prevent deadlocks. Great, I’ll just go replace them all… blindly, because that’s how I roll.
This resulted in the System.Threading.Tasks.Task`1[Microsoft.AspNet.Mvc.Rendering.HtmlString] appearing in the page – what the.
So you actually need to add await to the code when changing to PartialAsync- so your call would look like this now:

@await Html.PartialAsync()

instead of the old way:

@Html.Partial()

.Net Core 3.0 Released Today

Microsoft released the .Net Core 3.0 framework today. Along with a lot of other changes, the biggest news is support for desktop app development, by supporting winforms and WPF. Venturebeat has more info on the release: https://venturebeat.com/2019/09/23/microsoft-releases-net-core-3-0-with-support-for-wpf-and-windows-forms/

And Microsoft has the announcement on their developer blog here: https://devblogs.microsoft.com/dotnet/announcing-net-core-3-0/

Wordfence cannot delete files on Windows Server IIS

I’ve been running Wordfence on a number of wordpress sites I run on a windows server. Yes, you can run wordpress/php/mysql on windows. No, it’s not a great idea though. I’ve run into numerous issues with this setup and regret doing it, but it’s also been interesting to see the varying levels of support for running this configuration.

Wordfence will scan for infected files on my windows installs, and will find and list them- but when I try to delete the infected files, it always shows an error dialog stating “An invalid file was requested for deletion.” I initially thought this was a permissions issue, but after ruling this out I asked online about this error. I was surprised to not find a lot of others having the same issue, just a few. And the support forums didn’t do much to help either. I finally noticed that Wordfence lists all the officially supported operating systems, and Windows is *not* on that list. Woops.

Since php is not compiled, I decided to spend 10 minutes and see if I could find the source of this issue- and sure enough, I found that Wordfence was calculating a path incorrectly such that it was getting confused by the backslashes in the Windows file system. It handles this fine in almost every other area, but this one line was comparing two paths, and one had a forward slash where the other had a backslash- making them unequal, and thus the “An invalid file was requested for deletion” error is triggered.
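A minimal sketch of the mismatch (illustrative paths, not Wordfence’s actual code):

```python
# Two strings that name the same Windows directory compare unequal
# when their separators differ.
local_path = "C:\\inetpub\\wwwroot\\wp-content\\"
reported_file = "C:/inetpub/wwwroot/wp-content/plugins/suspect.php"

# The naive prefix check fails because of the mixed separators:
print(reported_file.startswith(local_path))  # False

# Normalizing both sides to one separator fixes the comparison:
def norm(p):
    return p.replace("\\", "/")

print(norm(reported_file).startswith(norm(local_path)))  # True
```

That second, normalized comparison is the essence of the fix below.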

I was able to hack the code a little to fix this, and now I have my windows wordpress wordfence working and deleting the files I request it to. You can update it yourself as well if you need this- find the file wp-content\plugins\wordfence\lib\wordfenceClass.php and update the following portion with these modifications, starting at line 4987:

$file = $issue['data']['file'];
$localFile = realpath($localFile);
$localPath = realpath(ABSPATH) . DIRECTORY_SEPARATOR;
if(strpos($localFile, $localPath) !== 0){
	return array('errorMsg' => __('An invalid file was requested for deletion.', 'wordfence'));
}

Note that I’m sure Wordfence is not a fan of having their php files edited, so this is fully unsupported and any updates to the plugin will likely overwrite this file and “break” it again.

Wordfence, feel free to implement this fix. Y’all are really close to working on a whole other operating system 🙂

IGSHID – the new(?) instagram click tracking ID

Apparently instagram has started adding a tracking click id named igshid that is similar in purpose to the facebook click id named fbclid- although this parameter seems to be used in links TO instagram instead of on links outbound from it as the facebook one is. I haven’t found any real info on this parameter yet, I’ll dig a bit more and update here.

WordPress Connection Timed Out Errors- debugging and repairing

Maybe you have an older, heavily modified wordpress site with some outdated custom code, older plugins, and various other custom stuff that hasn’t been updated in some time. Your site occasionally stops responding for a while, often with an “Error establishing a database connection” message. In MySQL you see a bunch of sleeping connections from your site, piling up until the connection limit is hit (often 100 connections) and then no more connections can be made- until all those sleeping connections finally time out (often set to 300 seconds) and clear, so the site can start making new connections again. You try debugging this and can’t figure out why there are sleeping connections- WPDB and/or the mysql client code all close connections after use, so how can they be left running? The main pages on the site are fast and responsive, so what is holding connections open for so long anyway? And why the heck does it happen so infrequently, but frequently enough to be a pain?
Do this:
-Turn on error messages and logging to an error file (will add details here, but there’s plenty of info out there on how to do this)
-Let it run until the problem happens. Or, try to turn logging on WHILE the problem is happening- otherwise the log file might get huge. Mine has a ton of deprecation warnings so it grows rapidly.
-Once logging has run while the sleeping-connection problem is happening, open the log file and search for “fatal” to find the fatal error messages.
-You should find a line with a fatal error message about the database connection timing out after 300 seconds- this line will show you which php file caused the problem. In my case, it was an outdated plugin that I really didn’t need anymore, so I disabled it.
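For the logging step above, the stock wp-config.php debug constants do the job (add them above the “That’s all, stop editing!” line):

```php
// wp-config.php – enable debug logging (standard WordPress constants)
define('WP_DEBUG', true);          // turn on debug mode
define('WP_DEBUG_LOG', true);      // write errors to wp-content/debug.log
define('WP_DEBUG_DISPLAY', false); // don't print errors onto visitors' pages
```

Remember to set WP_DEBUG back to false once you’ve found the culprit, or that debug.log will keep growing.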

And now, back to semi-stable, or at least not crashing, wordpress bliss.

(I still hate wordpress though. well, php. I hate php. Yes! it’s horrible.)


zwi-cofg.php – a mystery infected file on WordPress sites

I run a number of wordpress sites on a windows/IIS server, and many of these sites keep getting infected with some kind of trojan that seems to redirect traffic away to other sites, thus building “fake” backlinks and traffic. One of the files that commonly shows up is named zwi-cofg.php – it has a bunch of heavily disguised code inside it, so I haven’t dug into what it actually does yet- but of all the infections I’ve seen, this file seems to be the most common. Googling it didn’t reveal anything, so I figured I would create a post and see if any others are discovering this file- and what you may have figured out about it? I’m also working to lock down my sites so they don’t continue to get infected, but I’m beginning to think this is not an easy task when running php on windows servers. All the best practices I’ve seen don’t seem to go far enough. Continuing to investigate…

DNS problem: SERVFAIL looking up CAA for domain name

Certify The Web has been a great tool for setting up free ssl certs, and is especially nice if you have a lot of sites- for both cost savings and the ability to auto-renew certs. I use this windows based tool to automate the process and it’s worked great for a long while.
I started having errors recently with the tool though- along the lines of “DNS problem: SERVFAIL looking up CAA for ” followed by the specific domain name. This took a little head scratching, but apparently the problem comes from a new CAA record that DNS servers can (should?) support- it’s a record that allows you to list which certificate authorities are trusted for the particular domain.
Many/most DNS servers didn’t support this record a while back, and certify would swallow the “error” response received when querying for it. But this recently changed, and certify now requires that the CAA query succeed- even if the result is empty. So, error responses are no longer allowed, but empty ones are fine.
I use the DNS servers at my domain registrar for a lot of domains, and apparently the server has not been updated to support this record- so I suddenly began to get this error on my sites. I use AWS DNS for higher profile sites, and it seems to work fine.
In case you run into this with your sites, you’ll need to get in touch with your DNS provider and ask them to update their servers to support this new(er) record type.
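For reference, here’s what CAA records look like in a BIND-style zone file- the domain and CA names are placeholders, and you can check any domain’s CAA support with `dig CAA yourdomain.com`:

```
; authorize Let's Encrypt to issue certs for this domain
example.com.  3600  IN  CAA  0 issue "letsencrypt.org"
; optional: where CAs should report policy violations
example.com.  3600  IN  CAA  0 iodef "mailto:hostmaster@example.com"
```

An empty answer to the CAA query is fine (any CA may issue); a SERVFAIL is what breaks the cert issuance.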

Copy Amazon cart to another account

If you have multiple amazon accounts like I do (business account and personal), there’s a good chance you’ve added a bunch of products to your cart to then realize you are logged into the wrong account. Argh.
Amazon used to have some feature to copy cart contents to a different account based on email, but apparently this created some kind of security issue and was removed. They now suggest moving your cart to another Amazon account by putting all your cart items into a wish list, then sharing it to the other user, etc. etc. Sounds like a pain.
There is also a chrome extension that helps get around this, but who the heck trusts those things nowadays?
So the hack I found that sorta helps but doesn’t completely, is as follows:

-When logged into the “wrong” account with all the items in the cart, open your shopping cart page.
-Open a second tab and then login with the correct account. In the same browser.
Now you are logged into the correct account, but you still have that zombie cart open in the other tab from the wrong account.
-Carefully right click each item in your old cart, and click “open in new tab”
-Go through each tab and click “add to cart”- this will add them all to the cart of the account you are currently logged into.

Proceed with checkout and done. Not one click, but not terrible either.

Thanos Bad Blood

Googling some stuff about Bad Blood (the book about Theranos) and comparing Theranos to Thanos- discovered a lot of people are googling “Thanos Bad Blood”. I guess we could throw some Taylor Swift in there to make the full remix for There Will Be Bad Blood and Grammar: The Musical. Or something. I don’t know anything about Thanos yet so he (or it?) was likely excluded in this working title. Something about turning to sand and disappearing… speaking of, where has Liz Holmes been lately? Anyway, if you wound up here, you might be one of those misguided googleurs and you’re actually looking for this book titled Bad Blood: Secrets and Lies in a Silicon Valley Startup. It’s on my to-read list, unfortunately under a pile of more important reads.

Update: wound up buying the audible book and listened to it while on the road. Good “read” though a bit too long, and at times it felt almost petty in trying to make Liz and Co look bad, but… they definitely deserved it. Her ability to manipulate people was second to almost none- perfect mix of young, blonde, confident, well spoken- oh, and a complete psychopath. It’s interesting to see how she kept doubling down on fake-it-til-you-make-it, but seriously, you really don’t want to do that with a device that will definitely- not maybe- eventually wind up killing people due to it simply just Not Working. But the part that truly demonstrates her insanity is- there just was not any real path forward to a version of their machine that would address all the issues and eventually work. So the “make it” part of the faking it just wasn’t there. This is where mental illness had to be in effect, convincing herself of things that simply were not true. And she likely believed them. Really amazing story.

FBCLID – the new Facebook Click ID querystring parameter

Facebook has begun adding a querystring parameter to outbound links- the FBCLID – aka facebook click id. This parameter is annoying for many when they try to copy and then paste or share a link to something they clicked on from facebook, as a normally clean and even short url now has this big ugly FBCLID parameter appended to it.

I assume this parameter has been added to assist with tracking facebook clicks on websites that may not have cookie tracking enabled, but it could also help track how links are shared with others- as this would let facebook see who visited the original link and then who they shared it with, since each click has a unique FBCLID value assigned to it.

My hope is that this click id will allow for more granular conversion tracking for facebook ad clicks. Currently, facebook requires a pixel be installed on your landing page, and any conversion events you wish to track are fired by script executed on the page. This works well overall, but it is not optimal for lifetime value calculations inside the facebook adcenter. If an ecommerce store has an offline conversion, or doesn’t know the true value of a conversion until a day after the initial interaction, this information is difficult to load back into facebook so the ads can be auto-optimized. I’ll be researching this more soon, but my hope is that the fbclid parameter will allow us to store a unique visitor ID and then load conversions with associated values back into facebook- possibly later, and perhaps multiple conversions- and have an accurate profitability number that we can then tune and optimize ad spend against. I will update as I learn more about this.
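As a sketch of the idea- capturing the fbclid on the landing page so it can be tied back to conversions later (the url and id value here are made up):

```python
from urllib.parse import urlparse, parse_qs

def extract_fbclid(landing_url):
    # Pull the fbclid value out of the landing-page querystring,
    # e.g. to store alongside the visitor's session for later
    # offline-conversion matching.
    params = parse_qs(urlparse(landing_url).query)
    return params.get("fbclid", [None])[0]

url = "https://example.com/product?utm_source=facebook&fbclid=IwAR2abc123"
print(extract_fbclid(url))  # IwAR2abc123
```

Store that value with the order record, and in theory later conversion values could be reported back against it.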