StableBit CloudDrive 1.0.0.777 BETA

Posted in StableBit on December 7th, 2016 by alex

The next testing milestone of StableBit CloudDrive is here and it’s version 1.0.0.777 BETA.

Get it here: http://stablebit.com/CloudDrive/Download

Automatic updates should be going out to older versions within the next day or so.

StableBit CloudDrive 1.0.0.777 BETA

What’s New

As you can probably tell from the large jump in the version number (from 1.0.0.463), a lot of changes have gone into this new BETA, and a lot of testing has gone into it as well. While there are some new features in this version, first and foremost, this version is primarily focused on fixing bugs and improving stability. Writing some new tests was also a big priority for this BETA, and this version has passed a number of important tests, including new data consistency tests and power failure tests.

If you’d like to see the details, the full change log with all of the fixes is available here, as always: http://stablebit.com/CloudDrive/ChangeLog

But aside from the fixes, here’s a summary of the major new features in this version (as compared to 1.0.0.463):

  • New cache types (expandable, fixed, proportional).
  • ReFS support for cloud drives (Windows 8 and newer).
  • FTP / FTPS / SFTP provider.

About Backwards Compatibility

While version 1.0.0.777 is fully backwards compatible with 1.0.0.463, you should know that some under-the-hood features may not be enabled on drives created with a version prior to 1.0.0.777. For example, always-on encryption and file ID optimizations will not be enabled if you’ve created your drive with version 1.0.0.463. This is mostly a technical distinction, and functionally, any drives created with version 1.0.0.463 or older should continue to work in version 1.0.0.777.

Now let’s dive into the new cache types.

New Cache Types

Cache Types

StableBit CloudDrive now supports a new setting that will let you specify the cache type to use for your new cloud drive.

Three cache types are now supported:

  • Fixed
  • Proportional
  • Expandable (default)

Let’s see how each one works.

Local Cache

Let’s begin by imagining that you have a 500 GB volume that you would like to use for your cloud drive’s cache, and that it already has 200 GB worth of files on it.

Local Disk

Your existing files on that drive won’t be affected by the StableBit CloudDrive cache.

Fixed

Let’s talk about the fixed cache type first because this is the easiest one to understand. Fixed simply means that the on-disk cache will strive to never exceed the preset cache size. So whatever you set the cache size to, that’s the maximum amount of local disk space that it will consume.

Let’s imagine that you created a 100 GB cache on your 500 GB drive:

Fixed Cache

As you use your cloud drive, StableBit CloudDrive will learn which data on your cloud drive is accessed most frequently and it will automatically cache that data locally in the on-disk cache for faster access.

The fixed cache is very simple, but it has some disadvantages. Let’s see what happens when you copy some new data onto the cloud drive:

Learned / To Upload

As you write new files onto your cloud drive, StableBit CloudDrive will store the newly written data in the local cache and queue it up for upload. By writing the newly copied data directly to the cache, StableBit CloudDrive ensures that the file copy operation to your cloud drive completes as quickly as possible.

Once uploading completes, the data that was just uploaded remains in the cache:

Learned / New

As you may have noticed, by simply writing data to the cloud drive, you have overwritten some of the learned portion of the cache (the adaptive part of the cache that holds the most frequently accessed cloud data).

Let’s see what happens when you try to copy some more data to a cloud drive that is using a fixed cache:

To Upload

As you can see, the entire learned portion of the cache has now been overwritten with data that needs to be uploaded:

New

Once uploading completes, the cache has now lost all of its learned data, and it must relearn and re-download the data that is accessed most frequently.

Fixed cache advantages:

  • Has a predictable fixed size.
  • Maximizes write speeds by utilizing the entire size of the fixed cache.

Disadvantages:

  • Writing to the cloud drive will overwrite any learned data in the cache.

Overall, the fixed cache is optimized for accessing recently written data over the most frequently accessed data. If that’s what you’re looking for then the fixed cache is perfect for that. But for a more balanced approach, let’s take a look at the proportional cache type.
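To make the fixed cache’s eviction behavior concrete, here’s a rough sketch in Python. This is an illustrative model of the behavior described above with invented names and block counts, not StableBit CloudDrive’s actual code:

```python
# A minimal model of a fixed cache: newly written data always enters the
# cache, and when the cache is full, "learned" blocks are evicted to make
# room for data that is queued for upload.
class FixedCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.learned = []    # frequently accessed blocks, oldest first
        self.to_upload = []  # newly written blocks queued for upload

    def write(self, block):
        # Evict learned blocks until the new write fits within the
        # fixed capacity.
        while (len(self.learned) + len(self.to_upload) >= self.capacity
               and self.learned):
            self.learned.pop(0)
        self.to_upload.append(block)

cache = FixedCache(capacity_blocks=4)
cache.learned = ["a", "b", "c", "d"]   # cache is full of learned data
cache.write("new1")
cache.write("new2")
print(cache.learned)   # → ['c', 'd']  (two learned blocks were evicted)
```

Run enough writes through this model and the learned list empties out entirely, which is exactly the relearning cost described above.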

Proportional

The proportional cache type is similar to the fixed cache in that it also has a fixed size. But in addition to the size of the cache, a proportional cache allows you to define how much of the cache should be used to store data that needs to be uploaded versus data that is learned.

Once that proportion is defined, when you write new data to the cloud drive, only a part of the cache will be used to speed up the writes, while the other part will always be used to store the most frequently accessed data.

Proportional

Once uploading is complete, one part of the cache is used to store learned data and the other part will contain new data.

Proportional / New

The proportional threshold ensures that newly written data never overwrites the learned data.

Proportional cache advantages:

  • Has a predictable fixed size.
  • Does not overwrite the entire portion of the learned cache with newly written data.

Disadvantages:

  • This cache type is the least optimized for writes to the cloud drive.
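Extending the same toy model (again with invented names and numbers, not the product’s code), a proportional cache confines writes to the upload portion, so learned data is never evicted, at the cost of slower sustained writes:

```python
# A minimal model of a proportional cache: the capacity is split by a
# user-defined proportion, and writes may only use the upload portion.
class ProportionalCache:
    def __init__(self, capacity_blocks, upload_fraction):
        self.upload_budget = int(capacity_blocks * upload_fraction)
        self.learned_budget = capacity_blocks - self.upload_budget
        self.learned = []
        self.to_upload = []

    def write(self, block):
        if len(self.to_upload) >= self.upload_budget:
            # The upload region is full: the write must wait for uploads
            # to drain, which is why this type is least optimized for writes.
            return False
        self.to_upload.append(block)
        return True

cache = ProportionalCache(capacity_blocks=10, upload_fraction=0.3)
cache.learned = ["a", "b", "c"]                  # never touched by writes
results = [cache.write(f"new{i}") for i in range(4)]
print(results)   # → [True, True, True, False]
```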

Next, let’s look at the most optimal cache type, the expandable cache.

Expandable (default)

The expandable cache is optimized for the fastest reads and writes to and from the cloud drive. However, unlike the fixed and the proportional cache types, the expandable cache does not have a predictable fixed size.

When you write data to a cloud drive with an expandable cache, those writes will expand the cache’s size past its set limit. While the cache will expand as new data is written to the cloud drive, it will never consume all of the free space that is available on the volume. It will always maintain some free space as a buffer at all times by throttling the writes to the cloud drive when necessary.

Expandable

This ensures that the newly written data never overwrites any previously learned data, while at the same time optimizing the writes to the cloud drive by utilizing most of the free space that is available on the cache volume.

Once uploading completes, the cache shrinks back down to its preset size and the previously learned data is not affected.
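The expand-then-shrink behavior can be modeled roughly like this. The numbers and the free-space buffer size are illustrative assumptions; the product’s actual throttling logic is internal:

```python
# A minimal model of an expandable cache: it grows past its preset size
# as data is written, but throttles writes so the cache volume always
# keeps a free-space buffer, then shrinks back once uploads complete.
class ExpandableCache:
    def __init__(self, preset_size, volume_free, min_free_buffer):
        self.preset_size = preset_size        # size it shrinks back to
        self.volume_free = volume_free        # free space on the cache volume
        self.min_free_buffer = min_free_buffer
        self.blocks = 0

    def write(self, n_blocks):
        # Grow freely, but never into the reserved free-space buffer.
        writable = self.volume_free - self.min_free_buffer
        accepted = min(n_blocks, writable)    # the rest is throttled
        self.blocks += accepted
        self.volume_free -= accepted
        return accepted                       # caller retries the remainder

    def on_upload_complete(self):
        # Shrink back down to the preset size once uploading finishes.
        freed = max(0, self.blocks - self.preset_size)
        self.blocks -= freed
        self.volume_free += freed

cache = ExpandableCache(preset_size=100, volume_free=300, min_free_buffer=50)
accepted = cache.write(400)   # only 250 blocks fit before hitting the buffer
```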

Learned

Advantages of the expandable cache:

  • Optimizes writes by utilizing most of the free space on the cache drive, while making sure that the cache drive doesn’t run out of disk space.
  • Optimizes reads by never overwriting previously learned data.

Disadvantages:

  • Does not have a predictable fixed size.
  • May consume large amounts of disk space when a lot of new data is written to the cloud drive.

The expandable cache type is the default and the recommended cache type in StableBit CloudDrive 1.0.0.777, and it was the only (implicit) cache type in 1.0.0.463.

ReFS Support

Another new feature in StableBit CloudDrive version 1.0.0.777 is ReFS support. StableBit CloudDrive can now format newly created cloud drives with the ReFS file system. When creating a new cloud drive, look under “Advanced Settings”, and you will be able to choose ReFS as the file system when using Windows 8 or newer.

ReFS

ReFS is a Microsoft file system that is designed to be more resilient in the face of data corruption, but it may reduce the cloud drive’s overall performance by introducing some additional overhead. ReFS is only compatible with Microsoft Windows 8 and newer. Cloud drives formatted with the ReFS file system will not mount on older operating systems. So if you ever expect to attach a cloud drive to Windows 7 or older, do not use ReFS.

FTP / FTPS / SFTP Provider

In addition to ReFS support, StableBit CloudDrive now has comprehensive support for FTP, FTPS and SFTP. You can now create new cloud drives that store their data on FTP sites.

FTP

FTP over SSL is also supported (in both implicit and explicit modes), which includes the optional use of client certificates for authentication.

FTPS

FTP over SSH (SFTP) support is available as well, with optional private key and certificate-based authentication.

Kerberos based authentication (Domain\User) for FTPS can be used as well:

FTP Kerberos

Maximum Connections

One potential issue with FTP is that it’s not uncommon for FTP servers to limit the number of connections that a user is allowed to make, and this can present a bit of a problem for StableBit CloudDrive. While there has always been a way to configure the number of download and upload threads that StableBit CloudDrive uses, those threads do not necessarily correspond to connections. In the StableBit CloudDrive I/O pipeline, threads can be split or joined depending on the exact I/O operation being performed, and in certain instances, extra threads can be spawned on demand in a “thread boost” operation as well.

In order to address this strict connection limit requirement for FTP, version 1.0.0.777 introduces a new “Maximum connections” setting to the I/O performance window.

Maximum Connections

This setting is only available for providers that are sensitive to the connection count (FTP only for now), and it starts off at a very conservative default of 2. You can of course increase or decrease this connection limit (or turn it off altogether) depending on the FTP server that you’re connecting to, but if enabled, it must be greater than the upload thread count. The reason is that you always want to have at least one connection available for downloads in order to maintain reasonable drive performance.
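The constraint can be sketched as a simple connection pool. The class name and semantics are invented for illustration; the point is only that the limit must exceed the upload thread count so a download can always get a connection:

```python
# A sketch of a connection limiter that enforces the rule described above:
# the maximum connection count must exceed the upload thread count, so at
# least one connection always remains available for downloads.
class ConnectionLimiter:
    def __init__(self, max_connections, upload_threads):
        if max_connections <= upload_threads:
            raise ValueError(
                "max connections must be greater than the upload thread count")
        self.max_connections = max_connections
        self.in_use = 0

    def acquire(self):
        # Grant a connection only if one is free; otherwise the caller waits.
        if self.in_use >= self.max_connections:
            return False
        self.in_use += 1
        return True

    def release(self):
        self.in_use -= 1

limiter = ConnectionLimiter(max_connections=2, upload_threads=1)
print(limiter.acquire(), limiter.acquire(), limiter.acquire())  # → True True False
```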

Going Forward

I expect this to be the last public BETA of StableBit CloudDrive and hopefully a 1.0 release final is not far behind. Of course the exact timing of that will depend on any future feedback and bug reports that we receive.

If you do experience issues with this BETA, as always, please let us know here: https://stablebit.com/Contact

Aside from any critical bugs that are found, fit and finish is going to be the focus as we approach the 1.0 release final. This includes finishing up the documentation and making usability tweaks to the UI.

Finally, I’d like to thank everyone for testing all of the StableBit CloudDrive BETAs that we’ve had so far and reporting any issues encountered. Quite a few of the fixes in 1.0.0.777 came straight from user feedback, and that just makes the software that much more stable for everyone.

StableBit CloudDrive 1.0.0.463 BETA

Posted in StableBit on February 19th, 2016 by alex

The next public BETA of StableBit CloudDrive is now available for download.

Get it here: http://stablebit.com/CloudDrive/Download

Providers

For a full change log visit: http://stablebit.com/CloudDrive/ChangeLog?Platform=win

New Providers

First off, let me mention that this build adds support for Google Drive, and Microsoft’s OneDrive is no longer marked as an “Experimental” provider. So that’s two more providers that are now available for use.

Message Authentication Code

Any newly created encrypted cloud drives will now use an HMAC to verify that your encrypted data was not tampered with.

Previous BETAs of StableBit CloudDrive had used CRC32 in order to verify that your data stored in the provider had not been corrupted, but CRC32 does nothing to protect your encrypted data from malicious modification (and that was never the intent).

Theoretically speaking, even when not using authentication, your data is safe. Without knowing your encryption key, an attacker would not be able to modify the encrypted blobs of data to achieve some specific result. But nevertheless, it’s good security policy to authenticate any encrypted data before trying to decrypt it. This is more of a belt and suspenders approach, where if there were some weakness discovered in AES in the future, an attacker would not be able to craft a maliciously encrypted blob to take advantage of that weakness, without knowing the HMAC key.
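The authenticate-before-decrypt idea can be sketched with Python’s standard library. The XOR “cipher” below is a stand-in purely for illustration; the product uses AES, and its exact key handling and storage format are not public:

```python
import hashlib
import hmac

# Encrypt-then-MAC: store ciphertext followed by its HMAC-SHA256 tag,
# and verify the tag before ever attempting to decrypt.
def seal(plain: bytes, enc_key: bytes, mac_key: bytes) -> bytes:
    # Toy XOR "encryption" stands in for AES here.
    blob = bytes(b ^ enc_key[i % len(enc_key)] for i, b in enumerate(plain))
    tag = hmac.new(mac_key, blob, hashlib.sha256).digest()
    return blob + tag

def open_sealed(stored: bytes, enc_key: bytes, mac_key: bytes) -> bytes:
    blob, tag = stored[:-32], stored[-32:]
    # Authenticate first; refuse to decrypt anything that was tampered with.
    expected = hmac.new(mac_key, blob, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("HMAC mismatch: data was modified")
    return bytes(b ^ enc_key[i % len(enc_key)] for i, b in enumerate(blob))

sealed = seal(b"chunk data", b"enc-key", b"mac-key")
```

Note the use of `hmac.compare_digest`, which compares tags in constant time to avoid leaking information through timing.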

HMAC

You can tell whether HMAC is in use on your encrypted drive by hovering your mouse over the yellow lock icon.

As a side note, full drive encryption products (like Microsoft’s BitLocker) typically don’t use any kind of authentication, just encryption.

Larger Chunk Sizes

As you know, StableBit CloudDrive stores its data in fixed sized chunks, in the cloud provider of your choice (or locally). In previous BETAs, for all cloud providers, the maximum chunk size was 1 MB. In the latest BETA, this is no longer the case. Now, all cloud providers default to storing their data in 10 MB sized chunks (and you can even increase that, if you’d like). This is important for optimizing StableBit CloudDrive for higher bandwidth connections, and reducing overhead associated with making each upload request.

This is a very comprehensive change. For example, the way that StableBit CloudDrive does data validation is now completely different. Instead of validating whole chunks, StableBit CloudDrive can now validate your data in unit sizes, and these units can be smaller than a chunk. In addition, in-memory chunk caching now occurs in unit sizes as well.

Chunk Information

As you can see in the screenshot above, even though the chunk size is 10 MB, validation is happening over 1 MB units. This makes it possible to download a part of a chunk while still being able to verify its data integrity and authenticity.

Because this changes the format of how the data is stored, this only applies to new cloud drives created after this change was implemented. You can check whether your cloud drive is using large chunk sizes by hovering your mouse over the total drive size (as it’s shown above).
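The benefit of validating in units smaller than a chunk can be sketched in a few lines. SHA-256 here is just illustrative; the actual checksums/HMACs and unit sizes StableBit CloudDrive uses internally are its own:

```python
import hashlib

UNIT = 1024 * 1024   # 1 MB validation unit, smaller than the 10 MB chunk

def unit_digests(chunk: bytes):
    # Hash each unit separately, so a partially downloaded chunk can still
    # be verified without fetching the whole 10 MB.
    return [hashlib.sha256(chunk[i:i + UNIT]).digest()
            for i in range(0, len(chunk), UNIT)]

chunk = bytes(10 * UNIT)            # one 10 MB chunk
digests = unit_digests(chunk)
print(len(digests))                 # → 10

# Verify a single downloaded unit (the 4th) against its stored digest:
unit = chunk[3 * UNIT:4 * UNIT]
assert hashlib.sha256(unit).digest() == digests[3]
```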

Server Throttling Indicator

Server Throttling

For high bandwidth users, it is perfectly normal to have the server send throttling responses, and StableBit CloudDrive respects them, performing exponential back-off to give the server some breathing room. In this build, when this happens, you’ll see an indicator in the bandwidth bar that shows either upload throttling or download throttling taking place.
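Exponential back-off itself is simple: each consecutive throttling response doubles the wait, up to a cap. The base delay and cap below are assumed values for illustration, not StableBit CloudDrive’s actual tuning:

```python
# Exponential back-off: wait base * 2^attempt seconds after each
# consecutive throttling response, capped at a maximum delay.
def backoff_delay(attempt, base=1.0, cap=60.0):
    return min(cap, base * (2 ** attempt))

delays = [backoff_delay(a) for a in range(8)]
print(delays)   # → [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0, 60.0]
```

Real implementations usually also add random jitter to the delay so many clients backing off at once don’t retry in lockstep.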

Tool Tip

You can hover over the turtle icon for more information.

What’s Next?

In addition to the things that I’ve mentioned here, there were a lot of other tweaks and fixes based on reported issues. So thank you everyone for submitting those bug reports.

As for the timing of the release final, I’m hopeful that this will be the last public BETA, and that the next release will be a Release Candidate. Shortly after that, a Release Final will be made available.

StableBit CloudDrive 1.0.0.403 BETA

Posted in StableBit on October 30th, 2015 by alex

The second public BETA of StableBit CloudDrive is now available for download. It includes a lot of bug fixes, official Windows 10 support, and more.

Windows 10

Download it here: https://stablebit.com/CloudDrive/Download

Anyone using the first BETA (or any of our internal BETAs) will see an automatic update notification within 24 hours.

Reliability Improvements

First and foremost, I should say that this BETA is focused almost entirely on improving the reliability of StableBit CloudDrive and fixing all of the issues that were discovered with the last BETA after it was made public. Some of those issues were fairly serious and some of them were fairly complicated to resolve, but we did resolve them in a comprehensive and meaningful way.

If you’d like a glimpse at all of the fixes, take a look at the full change log:
https://stablebit.com/CloudDrive/ChangeLog?Platform=win

I do thank everyone for testing the last BETA and reporting issues. Many of your reports did turn into direct bug fixes.

UI Tweaks

While most of the changes in this build were focused on fixing bugs, some UI tweaks were made as well.

Windows 10 support is an important part of this build and in terms of UI, the StableBit CloudDrive window will now be drawn properly on Windows 10. It will also animate properly when minimized, maximized, and snapped to the edge of the screen.

Animation

Additionally, if you hover over the “To upload” text in the local pie chart you will get an estimate of how long it will take to upload everything, given the current upload speed.

To Upload

User feedback was tweaked as well. The amount of superfluous user feedback should now be minimized, and some user feedback messages were tweaked for clarity.

OAuth 2.0

The OAuth 2.0 code was completely rewritten in order to provide a more consistent, easier to use, and more reliable experience. This is a comprehensive rewrite of all the code, from the user interface, to the back-end that manages OAuth 2.0, to the storage code that stores your encrypted credentials on your computer.

The new system is backwards compatible with the old system, except for Google Cloud Storage. This is due to the fact that, in the first BETA, Google Cloud Storage used the Google SDK to store the OAuth 2.0 data. In the latest BETA, we centralized the OAuth 2.0 code, and we now handle that for all of the providers that utilize OAuth 2.0.

OAuth 2.0

This means that if you have a cloud drive utilizing Google Cloud Storage and you upgrade to the latest BETA you will be asked to reauthorize your drive. Don’t worry, this is fairly simple and you will be guided through the process.

The Amazon Cloud Drive Debacle

Amazon Cloud Drive

Unfortunately, I am sad to announce that (for now) the Amazon Cloud Drive provider is no longer supported for production use with our product. This is something that we’ve been going back and forth with Amazon about for a while. So we do have a dialog open with the Amazon Cloud Drive team regarding this issue, and we are trying to find a mutually agreeable solution. I am very hopeful that we can resolve this before the 1.0 release final.

The problem comes from the fact that StableBit CloudDrive scales really well… Given sufficient bandwidth, it will saturate your uplink until the weakest link in the chain fails. From the emails that we’ve received from Amazon, this seems to be causing server load issues for Amazon.

Because of this, for now, the Amazon Cloud Drive provider will be classified as an “Experimental Provider”. If you’re currently using the Amazon Cloud Drive provider, you should stop doing so, at least until we can reach some sort of agreement with the Amazon team on how we can best resolve this situation.

Hopefully Amazon can come up with a comprehensive solution that will work for everyone.

Introducing StableBit CloudDrive

Posted in StableBit on May 28th, 2015 by alex

I am very pleased to announce that today we are launching a brand new product called StableBit CloudDrive as a public BETA.

StableBit CloudDrive

StableBit CloudDrive aims to be the best way to securely store your data in the cloud on Microsoft Windows.

You can download it here: https://stablebit.com/CloudDrive/Download

What it Does

  • StableBit CloudDrive creates a new virtual drive on your PC that stores its data in the cloud.
  • You can optionally encrypt your entire cloud drive with a key that only you know, for “trust no one” full drive encryption.
  • StableBit CloudDrive learns which data you access most frequently and stores that data in a cache on one of your local drives for quicker access.

You can also use it locally in order to create fully encrypted virtual drives.

For a full set of features you can take a look at: https://stablebit.com/CloudDrive/Features

StableBit CloudDrive is an Actual Drive

A Real Drive

StableBit CloudDrive answers the question, how can we best extend the Microsoft Windows operating system to support secure (encrypted) cloud storage?

The answer is, we emulate our own virtual drive in the kernel with full Plug and Play support. Because this emulated drive is not a physical drive, there is actually nothing physically attached to the system, but as far as Microsoft Windows is concerned it looks and acts just like a real physical drive.

Why is this important?

A cloud drive created by StableBit CloudDrive is compatible with almost all of your existing applications and integrates very well with existing Operating System level features. With full drive encryption enabled, it’s also fully secure against any adversaries who might want to get access to your data.

Encryption

Full Drive Encryption

StableBit CloudDrive features “trust no one” full drive encryption, giving you peace of mind that your data is safe from any adversaries.

StableBit CloudDrive’s full drive encryption doesn’t only encrypt your data in the cloud, it also makes sure that any data stored locally in the on-disk cache is encrypted as well.

In fact, StableBit CloudDrive encrypts your data as soon as it’s written to the cloud drive and decrypts it only when it’s read, offering full round trip encryption. This means that at no point is your encrypted data written to disk in an unencrypted form, either locally or in the cloud.

Performance

In order to optimize performance, StableBit CloudDrive features a number of important optimizations, one of which is local caching.

Local Caching

Create a New Drive

When creating a new cloud drive, you have the option of specifying how much data you would like to be cached locally.

Over time, StableBit CloudDrive will learn which data is accessed most frequently on your cloud drive and it will store that data locally for quicker access.

A Different Type of Cache

If you’ve ever heard of the Operating System’s cache, this is not that. This is a new type of cache that sits between the Operating System’s in-memory cache and the cloud. It was specifically designed in order to optimize accessing data from the cloud. It’s typically much larger than the in-memory cache that the Operating System maintains, and so it’s able to cache far more data.

This means that you need to access the cloud less frequently, giving your drive better performance and a better overall user experience.

The Prefetcher

Prefetcher

StableBit CloudDrive also features its own prefetcher on top of the local cache. This prefetcher detects sequential data access and starts pre-downloading data that you are about to read in advance.
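Sequential-access detection can be sketched with a simple heuristic: once a few consecutive block reads are contiguous, start fetching ahead. The trigger and window values here are invented for illustration, not the product’s actual tuning:

```python
# A toy prefetcher: track the last block read and count contiguous reads;
# once the run is long enough, pre-download the blocks that follow.
class Prefetcher:
    def __init__(self, trigger=3, window=4):
        self.trigger = trigger   # contiguous reads needed before prefetching
        self.window = window     # how many blocks to fetch ahead
        self.last_block = None
        self.run = 0

    def on_read(self, block):
        if self.last_block is not None and block == self.last_block + 1:
            self.run += 1
        else:
            self.run = 1         # pattern broke; start counting over
        self.last_block = block
        if self.run >= self.trigger:
            # Sequential access detected: return blocks to pre-download.
            return list(range(block + 1, block + 1 + self.window))
        return []

pf = Prefetcher()
pf.on_read(10)
pf.on_read(11)
print(pf.on_read(12))   # → [13, 14, 15, 16]
```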

This is perfect for playing back media smoothly directly from the cloud (provided that you have sufficient bandwidth).

It’s a BETA

StableBit CloudDrive has been in development for over a year now and it was written almost entirely from scratch (some code was borrowed from StableBit DrivePool, but mostly everything is brand new). StableBit CloudDrive doesn’t use any 3rd party “disk in a box” solutions; everything was custom written in order to ensure the best possible implementation.

But, keep in mind that this is a 1.0 BETA, and so you will be testing a product that is still in development and there are bound to be issues that you may encounter.

If you do encounter a problem, you can let us know here: https://stablebit.com/Contact

StableBit CloudDrive and the StableBit Scanner

When using StableBit CloudDrive together with the StableBit Scanner you get an additional benefit of having the file system on your cloud drive scanned periodically for damage.

You should ideally use StableBit Scanner 2.5.2.3100 or newer when using it together with StableBit CloudDrive. Older versions of the StableBit Scanner will recognize your cloud drive as a regular disk and will attempt to scan its surface. While there’s nothing technically wrong with that, and it will work, it will cause excessive bandwidth usage, which is why surface scanning of cloud drives is off by default in StableBit Scanner 2.5.2.3100.

Pricing Changes

As of today, we are changing our pricing structure.

Here are the new prices of a personal retail license for new customers:

StableBit CloudDrive Pre-Order

For new customers who are purchasing a personal retail license for StableBit CloudDrive, there is going to be a flat $5 discount for all pre-orders while the initial 1.0 BETA is ongoing.

Existing Customers

Existing customers get $10 off of the retail price of each product.

To get the discount:
  1. Visit: https://stablebit.com/Buy
  2. Enter your existing Activation ID at the bottom of that page in order to apply your discount.

The StableBit Bundle

New customers can purchase all of our products for $54.95 (which will go up to $59.95 after the initial StableBit CloudDrive BETA is over).

It’s the best deal and of course includes all future updates and gives you the option of purchasing any future products at a discount.

Buy the bundle here: https://stablebit.com/Buy

StableBit Scanner 2.5.1.3062 Release Final

Posted in StableBit on October 2nd, 2014 by alex

StableBit Scanner 2.5.1.3062 is now available as a release final.

StableBit Scanner – 2.5.1.3062

Get it here:
https://stablebit.com/Scanner/Download

Compared to StableBit Scanner version 2.4, this version is a massive upgrade, perhaps the biggest one we’ve done yet.

What’s New in 2.5

As the 2.5 BETA was progressing, I covered some of the new features in previous blog posts, so I won’t go into too much detail on those topics here.

Remote Control

StableBit Scanner 2.5 – Remote Control

Remote control is a way to manage your StableBit Scanner installation from another machine on your LAN. It’s fully automatic and super simple to use.

I’ve previously posted about it here.

New Notification Options

StableBit Scanner 2.5 – Notification

The StableBit Scanner 2.5 features a brand new notification system. You can receive notifications via Email, SMS, Speech, Twitter, or you can have them sent to your mobile devices (Android / iOS / Windows Phone / Windows).

You can read up on this new feature here.

Cloud Integration Enhancements

StableBit Scanner 2.5 – Disk Details

This actually took a lot of work, and is something that’s mostly invisible to the user, but I think that it was important and worth it, as it greatly improves the quality of the product as a whole.

Here’s a summary of what this means:

  • First of all, thanks to an update to the engine that powers the StableBit Scanner’s unique SMART interpretation system, the StableBit Scanner now has more specific information about each drive model. Things such as the maximum operating temperature and drive reliability figures are now available per drive model.
  • This lets us do much more intelligent temperature control and customized overheat warnings, depending on the use case scenario (Desktop vs. Server vs. Laptop, etc.). Everything is automatically configured for you, but you can tweak the settings if you want to.
  • The new data also enables us to issue more intelligent SMART warnings. For example, the StableBit Scanner now knows about the maximum load cycle count, per drive model, so it uses that information to determine when to issue a warning.
  • When known, the warranty period and drive reliability information are now shown under Disk Details.

You can read more about these improvements here.

SSD SMART Interpretation Improvements

Since SSDs operate fundamentally differently than hard drives, the typical set of SMART interpretation rules that apply to spinning drives mostly don’t apply to SSDs. But unfortunately, instead of using one unified set of SMART attributes for all SSDs, each SSD controller manufacturer has chosen to use their own proprietary set. To make things worse, they generally refuse to publish their SMART specifications, which makes interpreting SSD SMART data correctly all that much more difficult.

But this is where the StableBit Scanner can really shine. Because our SMART interpretation rules are cloud powered, we can keep them updated with new rules as new SSDs are released, without pushing out software updates. So your SMART data can actually improve over time as our SMART interpretation rules evolve.

I am proud to say that, as of right now, the StableBit Scanner has SMART interpretation rules for every SSD that it has ever seen, and it can only get better from here. This took a lot of effort and I’d like to make a point of it.

I’d also like to thank those who have chosen to submit their SMART data to BitFlock; this helps us improve our SMART interpretation rules and makes our job a little easier.

New UI Themes

I’d categorize this as a nicety, as it doesn’t really improve the core functionality of the product, but it was requested a number of times, so here you go.

StableBit Scanner – Flat UI

You can read a bit more about the new themes here.

StableBit Scanner 3.0

Let’s talk a bit about the future of the StableBit Scanner.

I think that the StableBit Scanner 2.X line turned out nicely and is a worthy followup to StableBit Scanner 1.0 (which ran on the original Windows Home Server). But it’s time to grow the software into something bigger.

StableBit Scanner 3.0 will be the next major release and it will add a fantastic new capability to the core scanning engine, among other features. I don’t want to talk about this yet, but I can’t wait to get a BETA of it out to the public. It may just knock your socks off.

StableBit Scanner 3.0 will also feature StableBit Cloud integration. The exact specifics of the StableBit Cloud are still being fleshed out and I’ll talk about it in some detail once there’s a working prototype.

You can read some more about the StableBit Cloud and how its development is progressing on our development wiki right here:
http://wiki.covecube.com/Development_Status#StableBit_Cloud

You can also find a list of some of the other things that we’re working on right now on that wiki.

Our Next Product

Coming up next, we’ll introduce a brand new StableBit product called StableBit CloudDrive. StableBit CloudDrive is a huge project, on the scale of StableBit DrivePool, and has been in development since late 2013.

I’ll have a blog post ready along with some screenshots, once I have a 1.0 public BETA.

Why using StableBit Scanner is a good idea

Posted in StableBit on October 2nd, 2014 by Christopher Courtney

Hello, I’m Christopher, and I’m the Director of Customer Relations here at Covecube Inc. For those that may not recognize me, I have been very active in the Windows Home Server community, where I usually go by the username of “Drashna”. I have even been awarded the Microsoft MVP Award for Windows Home Server for the tech support I’ve provided in the forums and how-to guides that I’ve written for Windows Home Server.

We tend to get a lot of questions about the StableBit Scanner, what it does and some of the values that it presents. So let me try to answer some of those questions here, and explain a bit more about what the StableBit Scanner does, and why it’s a great utility for maintaining the health of your disks.

I will apologize now for the amount of text here. There is a lot of information that I want to cover, and I don’t want to skim over any of it. So if you will bear with me, let’s cover exactly what the StableBit Scanner does, and why you should install it.

S.M.A.R.T. Data

First, let’s talk about the SMART data that the StableBit Scanner is able to pull from the disks. This data is pretty much universally accessible on any drive you can buy, whether it’s a “spinning” hard drive or a Solid State Drive. Most of the information is pretty standard, but there are some more device-specific values depending on the manufacturer of the device. And there are plenty of utilities out there to read the SMART data from your disk. For the most part, they all read the SMART data from the disks and interpret that data in a meaningful way for users. Some just show the raw output and let you know if the values are outside of manufacturer specification.

Let’s talk about some of these SMART values and what they mean for your system. It’s always a good idea to know what’s going on.

  • “Reallocated Sector Count” and “Reallocation Event Count” are probably the values that you will see increasing most often. What this means is that the disk has detected an issue with a bad section of the disk, and has reallocated the sectors to a special reserved (spare) area on the disk. This happens automatically, and prevents the disk from using these spots in the future.
    This is normal and typical on an HDD, and one or two appearing once in a while isn’t necessarily a bad sign. However, if you see this value rapidly increase on a disk, or you have a lot of them, then there may be damage to the physical medium of the drive and you may want to replace it immediately.
    Though, as this value increases, the performance of the disk may be adversely affected. The remapped data will be at another location on the drive, causing the read speed to decrease due to the “seek time” for the new location. And the more reallocated sectors you see, the more often this will happen. So if performance is very important, it may be worth replacing the drive sooner rather than later.
  • “Spin Retry Count” is a value only found on HDDs, obviously. It shows the number of times that the drive has failed to spin up to full speed and had to retry. This indicates a serious mechanical failure of the platters. There are a number of possible causes, but none are good. It means that you should remove the data from the drive immediately and replace the disk.
  • “Current Pending Sector Count” and “Uncorrectable Sector Count” – These two values tend to go hand in hand. They mean that the disk has encountered issues reading from the drive. In fact, if you force a surface scan at this point, you may end up with the same number (or more) of bad sectors as indicated by this value. The drive will attempt to write to these sectors eventually, and when that happens, it either succeeds and clears this value for that sector, or it fails and forces the disk to remap the sector. By “remap”, I mean that this will trigger a “Reallocated Sector Count” increase. This all happens automatically in the course of normal usage. Things like a full format, writing zeros to the disk, or utilities such as SpinRite try to force this process to happen quicker.
  • “Load Cycle Count” – This is a value that we get asked about a lot and one that can rapidly increase. Specifically, this is the number of head parking cycles that the drive has performed. Parking the heads is a normal process of the drive, and helps prevent accidental damage to the drive. This occurs when the drive idles. Depending on how this is configured on the drive, and how active the drive is, this can grow very slowly or can increase by 100 or more in a single hour. Western Digital Green drives are particularly notorious for being poorly configured and rapidly increasing this count. So this is a value that should be taken with a grain of salt. Watch it yourself, and if it only slowly increases, then you may be able to trust that it’s accurate. And in that case, it may be a good indicator of age and usage. However, this value doesn’t necessarily indicate an issue. It’s much like the “Power On Hours” or similar statistical information.
This isn’t a comprehensive list, by any means. But these are some of the most common SMART warnings you will see. And definitely, some of the more important values to know.
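The pending/reallocated sector lifecycle described above can be sketched as a simple model (illustrative only; the drive firmware does all of this internally):

```python
from dataclasses import dataclass

@dataclass
class SmartCounters:
    pending: int = 0        # "Current Pending Sector Count"
    reallocated: int = 0    # "Reallocated Sector Count"

def write_to_pending_sector(counters: SmartCounters, write_succeeds: bool) -> None:
    """Model of what happens when the drive eventually writes to a pending sector."""
    # Either way, the sector is no longer "pending".
    counters.pending -= 1
    if not write_succeeds:
        # The write failed, so the firmware remaps the sector to the spare area.
        counters.reallocated += 1

c = SmartCounters(pending=2)
write_to_pending_sector(c, write_succeeds=True)   # sector cleared
write_to_pending_sector(c, write_succeeds=False)  # sector remapped
print(c)  # SmartCounters(pending=0, reallocated=1)
```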

SMART data can be a good indicator of mechanical problems; however, it is reactive technology, for the most part. It’s designed to warn of immediate failure; it can’t predict the exact point in the future at which the drive will fail, and it’s not designed to. It’s akin to klaxons on a ship, letting you know that something is wrong and that you should scramble to fix the issue.

That brings us to the next subject.

Surface Scanning

Now for the “blood and guts” of what the StableBit Scanner does.

By default, the StableBit Scanner is configured to do a surface scan of the disks in the system. What do I mean by a “surface scan”? The StableBit Scanner does a sector by sector scan of the entire disk, ensuring that each and every sector on the drive is readable. And when it finds sectors that are not readable, it flags them and keeps on scanning the rest of the disk.

Now, why is this important? Because over time, the “bits” on the disk may degrade. If the data is not accessed at all, it can lose its stored state, and this is what is usually referred to as “bit rot” (different from “random bit flips”). Reading every sector gives the drive’s onboard diagnostics the opportunity to repair the section, or remap it if it needs to, before it becomes an issue. This process is called “Data Scrubbing“, and it helps your disk identify potential problems before they affect your data. You may notice it through changes in the SMART data values on the drive (such as a lowering of the uncorrectable sector count, or an increase in the Reallocated Sector Count).

Wear and Tear

Though, there is a good question that has been raised to us at least a couple of times: Does this surface scan put additional strain on your disks? 

For Solid State Drives? Absolutely not. They are designed to be read from many times without any degradation of the drive.

For conventional hard drives? That’s not as straightforward. Basically, any time the drive reads or writes data, there is a chance of damage occurring. However, modern drives are very, very good at preventing this from happening.

The other concern here would be wear on the mechanical parts of the drive, the parts that spin the platters, and the parts that move the read/write heads. By default, the StableBit Scanner is configured to do this intensive surface scan every 30 days. What does this mean for the disk? That it’s reading the entire surface of the drive, the full capacity of the disk, once a month. That’s a good amount of work for the drive, and it will happen often.

Well, what if I scan a 3 TB drive once a month? That’s about 36 TB read in a year. Once a week? That’s about 156 TB read in a year. Okay, that’s a lot of reads over a year’s time. However, how does that compare to normal usage? Well, do you back up the drives? If so, the entire contents of the disk are read, or every sector is read, depending on the backup utility. What about Windows Search? Or Previous Versions? Or how about streaming from the disks? And how often does this happen? Well, that really depends on your usage.

And to get some perspective here: I have several 3 TB drives in my system that are getting close to 2 years old. I move data around a lot. So what do my drives look like? Well, most report in the ballpark of 50-100 PB of reads and writes. That’s PB (petabytes). Each petabyte is about 1,000 TB, so that’s 50,000-100,000 TB of reads. If I scan once a month? That’s not even 1% of the total reads from the disk. And while I may not be a typical user in a lot of ways, it should give you a good idea of how little of an impact these surface scans have on your disks. And disks are designed to last years, even under heavy usage.
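A quick back-of-the-envelope check of those figures (a sketch only; the exact numbers depend on your drive size, scan interval, and actual lifetime I/O):

```python
# Annual read volume generated by surface scans of a 3 TB drive.
DRIVE_TB = 3

monthly_tb_per_year = DRIVE_TB * 12   # one full surface scan per month
weekly_tb_per_year = DRIVE_TB * 52    # one full surface scan per week

# Compare against a drive that has already seen ~50 PB of total I/O.
lifetime_tb = 50 * 1000               # 50 PB expressed in TB
scan_fraction = monthly_tb_per_year / lifetime_tb

print(monthly_tb_per_year)  # 36
print(weekly_tb_per_year)   # 156
print(scan_fraction)        # 0.00072 -> well under 1% of lifetime I/O
```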

Damaged Sectors

Now what are these damaged sectors that the StableBit Scanner finds and what does it mean to you?

Damaged sectors are bad sectors on the disk that the Surface Scan has issues reading. It means that… well, that it is likely damaged, and during normal operation, you may get an error accessing affected files (or even experience file system errors). These damaged sectors are the same ones that are identified by the “/r” switch on the CHKDSK utility.

Now, you may be asking yourself, why run StableBit Scanner and let it recover that data instead of CHKDSK? Well, you should and shouldn’t. It depends on what you want to do.

  • CHKDSK does a “best effort” to recover the data. It attempts to read, and then move, the data. However, once it determines that it can’t recover it, it reallocates the bad sector and makes that data unrecoverable. And depending on the circumstances, it could corrupt the data, or even lose sections of it.
  • StableBit Scanner cares about recovering data first and foremost. Once it’s identified damaged sectors, you can run a “file scan” which attempts to figure out what was damaged on the system and which files are affected. Then it lets you attempt to recover that data. In fact, StableBit Scanner uses 20 different “head placement profiles” to attempt to read the data. This is a lot more aggressive than the CHKDSK utility’s attempt to read the data.
  • StableBit Scanner does not repair this damage on the disk. If it fails to read the files and cannot recover that data, you can still run a data recovery utility to attempt to recover that data as well.
  • Again, if you run CHKDSK with the “/r” flag, it fixes the sector by reallocating it, meaning that you lose the ability to ever recover this affected data. This is because the data has been overwritten or the location remapped. So the data is no longer available for recovery.
And to re-emphasize here: StableBit Scanner does not fix damaged sectors. We are more concerned with recovering your data than with repairing the damage. The disk will eventually take care of this, or you can force it by using the “/R” flag for CHKDSK.

Conclusion

All in all, the StableBit Scanner is a great tool to inspect and maintain the health of your disks, and the data that is on them.

StableBit Scanner 2.5.0.3041 BETA – Drive Reliability

Posted in StableBit on July 10th, 2014 by alex – 2 Comments

StableBit Scanner 2.5.0.3041 BETA is now available for download and it comes with enhanced cloud enabled features along with 10 new themes.

Download it here: https://stablebit.com/Scanner/Download

New Themes

People have asked us: when is the StableBit Scanner going to get a visual refresh with a more modern look? Well, here you go.

The new version has new “Aero Glass” style themes that match the style of Windows 7:

Aero Glass Themes

It has new “Modern UI” themes that match the style of Windows 8:

Flat Themes

It even has a touch friendly theme:

Touch Theme

Cloud Enabled Drive Reliability Information

Drive Reliability

Starting with this build, the StableBit Scanner will now show drive reliability information that has been published by the manufacturer. You can find this new information in the Drive Details window under the new Reliability section.

Drive manufacturers typically publish drive reliability statistics in drive specification “data sheets”, like the one pictured below.

Drive Specifications

In the latest version of BitFlock, the back-end service that powers the StableBit Scanner’s SMART interpretation engine, in addition to providing the usual SMART interpretation data, we now provide drive reliability information for each specific drive model.

The new types of information provided are:

The Warranty Period

Not all drive models have a published warranty period, but for the ones that do, you will see the warranty period shown in the new Reliability section. Sometimes warranty periods differ from country to country. The warranty periods that the StableBit Scanner displays are based on information published for drives sold in the US.

Mean Time Between Failures (MTBF / MTTF)

Theoretically speaking, the MTBF (sometimes referred to as the MTTF) is the average amount of time that is expected to pass before a particular model of drive is expected to fail. When this number is calculated, the manufacturer assumes that the drive is running at an optimal temperature (typically around 40 degrees Celsius) with a typical workload (or duty cycle) for that model. The expected duty cycle is typically based on the type of drive. For example, an enterprise-level drive is typically expected to have a 24×7 duty cycle (8,760 hours per year).

Unfortunately, knowing the MTBF is of limited value for traditional spinning hard drives. Unlike SSDs, spinning hard drives don’t have anything that is “used up”, so predicting their failure rates using MTBF numbers isn’t very meaningful. The hard drive manufacturers know this, and that’s why most newer drives now report their failure rates using the Annualized Failure Rate (AFR), which is a more useful metric. I’ll talk about AFR in a bit.

Unlike with spinning hard drives, knowing the MTBF for SSDs can be more useful. Due to the nature of the technology that powers all SSDs, they have a finite amount of data that can be written to them. As a result, a finite amount of time can be calculated until expected drive failure. In addition, SSD manufacturers typically publish drive endurance numbers, which are even more useful for determining the expected lifetime of an SSD, and I’ll talk about those shortly.

Annualized Failure Rates (AFR)

Given the limited value of MTBF as it relates to spinning hard drives, most newer drives publish their expected failure rates in terms of an AFR. In short, the AFR is the percentage of drives of a particular model that are expected to fail every year.

The StableBit Scanner is able to calculate the AFR for a drive model if there is a published MTBF. The calculation is done assuming a 24×7 duty cycle. It will be noted in the UI if the AFR is coming from a published specification or is calculated from the MTBF.
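StableBit hasn’t published the exact formula, but a common way to derive an AFR from a published MTBF, assuming a constant (exponential) failure rate and a 24×7 duty cycle, looks like this (a sketch, not the Scanner’s actual code):

```python
import math

HOURS_PER_YEAR = 8760  # 24x7 duty cycle

def afr_from_mtbf(mtbf_hours: float) -> float:
    """Annualized failure rate implied by an MTBF,
    assuming an exponential failure distribution."""
    return 1.0 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

# A drive with a published MTBF of 1,000,000 hours:
print(round(afr_from_mtbf(1_000_000) * 100, 2))  # 0.87 -> about 0.87% per year
```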

Component Design Life (CDL)

This is an interesting metric that is rarely published but is useful to know when it is. It essentially tells you how long the drive manufacturer expects the drive’s components to last. This is in contrast to the warranty period, which is typically shorter.

If there is a published CDL for your drive model it will be shown in the Reliability section.

Endurance

Drive endurance relates almost exclusively to SSDs. Because of the technology that SSDs employ, they have a finite amount of data that can be written to them over their entire lifetime.

SSD manufacturers typically express drive endurance in the amount of data that can be written to the SSD per day and the amount of data that can be written over the total lifetime of the SSD.

Drive Reliability

In the above screenshot you can see that the selected SSD is rated for 37.3 GB of writes per day or 66.5 TB of writes over 5 years.

As a technical sidenote, the StableBit Scanner reports all byte measurements in binary (see http://en.wikipedia.org/wiki/Binary_prefix) and not decimal. Manufacturer published data is converted to binary where applicable.
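As a sanity check of the numbers in the screenshot, converting the daily endurance figure into a 5-year total using a binary unit ratio (assuming 365-day years and 1 TB = 1024 GB, per the binary-prefix convention noted above) reproduces the displayed value:

```python
# Endurance figures from the screenshot above.
GB_PER_DAY = 37.3
YEARS = 5

total_gb = GB_PER_DAY * 365 * YEARS
total_tb_binary = total_gb / 1024  # binary prefix: 1 TB = 1024 GB here

print(round(total_tb_binary, 1))  # 66.5 -> matches the displayed 66.5 TB
```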

Maximum Operating Temperature

SMART Temperature

In previous versions of the StableBit Scanner, a drive overheating warning was issued when the drive’s temperature exceeded a static temperature threshold that was specified in the Scanner’s settings. You were also able to override the maximum temperature of each drive in “Disk Settings”. Starting with this build the drive overheat warning behaves more intelligently.

In the new build, the maximum operating temperature of each drive in the system is retrieved independently in the following way:

Determining the Maximum Temperature

In general:

  • If there is a maximum temperature value specified in disk settings, then that value is used.
  • If the drive’s firmware publishes the maximum operating temperature via SCT, then that value is used.
  • Alternatively the StableBit Scanner uses the data from BitFlock in order to retrieve the maximum operating temperature for a particular drive model, as it is published in the manufacturer provided data sheets.
  • If the above methods fail, then the StableBit Scanner uses the static maximum temperature defined in Scanner Settings, as it did in previous versions.
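The fallback chain above can be sketched as follows (the function and parameter names are mine, not the Scanner’s, and the static default is only an example):

```python
def max_operating_temp(disk_settings_c=None, sct_c=None, bitflock_c=None,
                       static_default_c=54):
    """Resolve a drive's maximum operating temperature
    using the precedence described above."""
    if disk_settings_c is not None:
        return disk_settings_c, "Disk Settings"  # per-drive override
    if sct_c is not None:
        return sct_c, "SCT"                      # published by the drive's firmware
    if bitflock_c is not None:
        return bitflock_c, "BitFlock"            # manufacturer data sheet via the cloud
    return static_default_c, "Scanner Settings"  # static fallback

print(max_operating_temp(sct_c=60))  # (60, 'SCT')
print(max_operating_temp())          # (54, 'Scanner Settings')
```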

The source of the maximum temperature is shown in the SMART window, under the “Temperature” attribute.

Temperature Source

But simply getting a warning when the drive has already exceeded the maximum operating temperature may not be ideal. You may want to receive an overheating warning when the drive’s temperature starts to approach the maximum operating temperature, so that you can take corrective action to prevent the drive from exceeding it.

Scanner Settings – Heat

To fulfill this need the StableBit Scanner now allows you to specify a temperature “warning threshold”. In other words, if a drive’s temperature approaches its maximum operating temperature limit within the specified number of degrees, then an overheating warning will be issued.

The quick settings profiles will set up an appropriate temperature warning threshold for you.

Quick Settings – Server

  • Server
    • Static maximum temperature: 44 C
    • Warning threshold: 15 C
  • All Other Profiles
    • Static maximum temperature: 54 C
    • Warning threshold: off

Considering that servers are meant to be on 24×7, that the maximum operating temperature of a spinning drive is typically 55 C to 60 C, and that the optimum temperature should be around 40 C to 45 C, a 15 degree warning threshold seems appropriate.
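A minimal sketch of the warning-threshold logic (my own formulation of the behavior described above, not the Scanner’s code):

```python
def overheating_warning(current_c, max_c, warning_threshold_c=None):
    """Warn when the temperature comes within the threshold of the maximum,
    or only once it exceeds the maximum if the threshold is off (None)."""
    if warning_threshold_c is None:
        return current_c > max_c
    return current_c >= max_c - warning_threshold_c

# A spinning drive rated for 55 C with the Server profile's 15 C threshold
# starts warning at 40 C:
print(overheating_warning(40, 55, 15))  # True
print(overheating_warning(39, 55, 15))  # False
```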

SSDs are sensitive to high temperatures as well and typically have a maximum operating temperature of around 70 C. Some newer models will even slow themselves down in order to prevent themselves from overheating.

Load Cycle Counts

SMART – Load Cycles

Starting with this build the StableBit Scanner now retrieves the maximum load cycle count that each drive model is rated for, according to the manufacturer’s published data sheets. If your drive exceeds this limit then a warning is shown. Previously a warning was shown if the drive exceeded 300,000 load cycles, which is a common figure but does vary greatly among different drive models.

I’ve gotten asked a number of times about the severity of this warning and whether it indicates impending drive failure. My experience is that it doesn’t, but it’s still useful to know when the drive is operating past its designed tolerances and is potentially in danger of developing problems in the future.

Cloud Data Quality

This build of the StableBit Scanner comes with a number of new features that rely heavily on the quality of the data provided by the BitFlock cloud. In the next few months BitFlock is getting a major update in order to improve the number of drives that it recognizes and has metadata for. I’m expecting to see 100% drive model coverage for all of our StableBit Scanner customers, so if you don’t see metadata shown for your particular drive model today, the data is being updated.

Right now we have full coverage of all Western Digital and Seagate drives that any StableBit Scanner installation (with Cloud Features enabled) has ever seen, while other manufacturers may be a bit more spotty. SSDs are also a top priority so the quality of SSD SMART data will improve across the board as well.

Unfortunately some hard drive models don’t have any publicly published specifications for them. In particular, it’s difficult to find any meaningful specifications for drives that are sold in external enclosures. So some drive models will end up having little to no additional drive reliability information available about them. But overall, these seem to be few and far between.

Release Finals

Coming up next for the StableBit Scanner is the 2.5.0 release final build. StableBit DrivePool 1.3.6 will also be released as a final and the StableBit Scanner 1.0.6 will be as well.

StableBit DrivePool 2.1.0.558 Release Final

Posted in StableBit on June 17th, 2014 by alex – Be the first to comment

StableBit DrivePool 2.1.0.558

StableBit DrivePool 2.1.0.558 Release Final is now available for download:
http://stablebit.com/DrivePool/Download

In this post I’d like to make a quick recap of what’s new in this build since the last release final (2.0.0.420).

Bug Fixes

It has been quite a while since the StableBit DrivePool 2.0 release final and this build has accumulated quite a few bug fixes since then. Many issues were addressed in this release including UI glitches, service issues and file system issues.

Here are some of the highlights:

  • User Interface:
    • Minimize / Maximize buttons were added to the horizontal UI.
    • Tasks such as balancing and background duplication can now be boosted in I/O priority or completely aborted from the UI.
    • There is a new Troubleshooting menu under the settings menu which allows you to collect a boot time file system log, file system tracing data or to reset StableBit DrivePool’s settings to a clean install state (with your duplication settings and pooled data not affected).
    • You can now send a test email message after entering an email address for notifications.
    • Disk tooltips were not showing up sometimes.
    • The disks list was sometimes redrawing in a “glitchy” way causing an annoying flicker.
    • The disk index (or number) is now shown in the disk tooltip.
    • Performance statistics will now update in 2 second intervals (instead of every second) in order to make them more readable.
    • There is a new community submitted Bulgarian translation (thanks!). To contribute other translations visit: http://translate.covecube.com
  • Service:
    • Got rid of excessive error report generation in some places.
    • Pool performance statistics were being reported for all pools in aggregate instead of for each pool individually.
    • When the last disk that is part of a pool goes missing or is disconnected, the pool’s settings will be kept for 365 days just in case that pool comes back. Previously, that pool’s settings would have been reset.
    • Balancing plugins are now given 30 seconds to make their balancing calculations or else they will be forcefully aborted. This prevents a balancing plugin from stalling a balancing pass.
    • The pool will always be measured first before performing other tasks, such as the background consistency check. This change was made mainly to thwart user confusion over incomplete pool measurements showing up in the UI.
    • Improvements were made to the disk emptying algorithm when balancing.
    • Disk removal no longer requires a remeasure pass after it completes.
  • File System:
    • The virtual disk enumerator now registers as a storage controller, making our virtual disk more compatible with 3rd party software that queries for its controller.
    • Reparse points can now be created on the pool. This adds support for symbolic links, junction points and mount points on the pool. This one took a lot of effort to get done.
    • Fixed a consistent system crash on x86 systems triggered by heavy I/O.
    • Byte range locks were not following the same behavior as NTFS leading to issues with all kinds of 3rd party software.
    • On drive removal, empty “PoolPart…” files were being created needlessly.
    • A file size tracking inconsistency was fixed that was causing the pool measurement to drift.

For a full list of changes and fixes you can refer to the change log here:
http://stablebit.com/DrivePool/ChangeLog?Platform=win

File Placement

File placement is a new feature of StableBit DrivePool 2.1 and it gives you the ability to specify which disks will be used to store files in one or more folders on the pool. Further down I’ll show you some examples of how I’m using this feature to organize my pools in a way that wasn’t possible before.

Balancing…

File placement is accessible from the Pool Options menu under Balancing…

File Placement

This new feature adds the flexibility of per disk file organization with the convenience of drive pooling. It’s designed to give you the best of both worlds.

I’ve already talked extensively about the file placement UI and how it works in my previous blog posts, so I’m not going to repeat that. If you’d like you can refer to those blog posts here:

Instead, in this post I’m going to give you some practical examples of how I’m using file placement myself in order to optimize my virtual machine usage.

File Placement and SSDs

First, I’m going to show you how I’m using file placement combined with SSDs in order to seamlessly optimize the performance of one or two VMs on the pool.

In one of my machines I have a pool with a small SSD drive and a few larger spinning disks. I have a few virtual machines that I use, but some of them I use more often than others and I would like to place those frequently used VMs on the SSD drive for better performance. I started out by placing 2 virtual machines on the SSD drive using file placement.

Example

This gave me a tremendous performance boost whenever I used those VMs and it worked out great.

But as my usage changed, and the virtual machines grew, I decided to get a second SSD for my virtual machines and to spread them out over both SSDs. All I had to do was add the new SSD to the pool and designate my virtual machine folders to be placed on that disk as well.

Example

This approach allowed me to create a type of “sub-pool” that can be easily expanded without manually moving files around from disk to disk.

In the future, I can even designate a separate SSD for each VM, all on the same pool.

Example

The great thing about doing this with StableBit DrivePool and file placement is that the path to the VMs doesn’t change, regardless of where they’re actually stored. My virtualization software (VMware Workstation) always sees my virtual machines on the same pool drive letter, and I don’t have to worry about managing multiple pools or dealing with OS licensing issues (which tend to crop up when moving VMs around). All I do is flip a few check boxes in StableBit DrivePool and that’s it.

File Placement and Disk Throughput

I’ll give you another example that doesn’t involve SSDs.

On another system that’s mostly dedicated to running virtual machines, putting all of the VMs on SSDs is prohibitively expensive. So instead, I’ve opted to use file placement to place the VMs that I tend to use at the same time on separate spinning disks. This allows me to power on and use multiple virtual machines at the same time without oversaturating the disk I/O.

Example

With this setup, I can use virtual machines 1, 2, 3 and 4 at the same time, without incurring a huge disk I/O penalty. The same goes for virtual machines 5, 6, 7 and 8.

In the future, I can add more spinning disks to the pool and reorganize my VMs to let me run more of them at the same time without incurring a disk I/O penalty.

Again, doing this with StableBit DrivePool’s file placement is a breeze. It allows me to keep all of my virtual machines at the same path, while at the same time giving me the flexibility of future expansion and letting me choose which disks should be used to store each VM.

Coming up Next

Now that the 2.1 release final of StableBit DrivePool is out, StableBit Scanner 2.5.0 is on its way to a release final as well. Coming up within a week is the next public BETA of StableBit Scanner 2.5.0, and hopefully soon after that, a release final.

After that we’re going to see a public BETA of StableBit DrivePool 2.2.

StableBit DrivePool 2.1.0.553 RC – File Placement and “Product 3”

Posted in StableBit on May 30th, 2014 by alex – 3 Comments

StableBit DrivePool 2.1.0.553 Release Candidate is now available for download.

StableBit DrivePool 2.1.0.553 RC

Download: http://stablebit.com/DrivePool/Download

What’s New

Now that we’re approaching the 2.1 release final and file placement is implemented, I’d like to start talking about “the next big thing” for StableBit.

But before I get into that, let’s check out what’s new in this RC. Since my last post about version 2.1.0.528 BETA there have been a few new features added and a bunch of general fixes, not really related to any single area.

I’ll just highlight the noteworthy changes here.

New Features

  • Finalized all translations.
  • You can now send a test email message after entering your email address for notifications.
  • Minimize / maximize buttons were added to the horizontal UI. Stretch the window wide to see them.

Bug Fixes

  • Fixed “Access denied” when removing drives related to the “System Volume Information” folder.
  • The performance UI was not updating even though it was still open.
  • Renaming in a reparse point folder was not working.
  • Don’t show the “.covefs” folder in the UI.
  • Fixed a real-time file size tracking issue having to do with file overwriting.
  • WPF animations are now capped to 30 FPS on WSS for better performance.

There were a bunch of other fixes as well. You can check out the full changelog here: http://stablebit.com/DrivePool/ChangeLog?Platform=win

The Importance of File Placement

Let’s talk a bit about the future, the future of StableBit, and “the next big thing”.

As a general concept, file placement is the ability to tell StableBit DrivePool which disks are allowed to store files placed in particular folders on the pool. I’ve been planning this concept for some time now. I posted the first Nuts & Bolts discussion on file placement way back in September of 2013, but in that post I only talked about specifically controlling which files go onto which disks. Well… there’s actually a lot more to file placement that will extend the value of StableBit DrivePool tremendously.

“Product 3”

I’ve talked about Product 3 in the past, which is the code name for the next StableBit product. It’s currently in development and actually gives StableBit DrivePool’s file placement feature a much more prominent role.

In order to understand why, let’s talk about how things work right now. Right now, when you add a disk to the pool, a hidden “PoolPart” folder is created on that disk. Any pooled files that need to be stored on that disk are simply stored in that hidden Pool Part folder. So in reality, when you add a disk to the pool, you’re actually adding a Pool Part to the pool, and that Pool Part happens to be stored on a local disk.

I hope that you see where I’m going with this. Product 3 will allow you to add Pool Parts to the pool that are not necessarily stored on physical disks. This is going to open up a whole range of very exciting possibilities.

It’ll be possible to store Pool Parts on virtually anything you can imagine that can store persistent data: email servers, FTP, UNC shares and cloud storage are some examples. All you’ll need is a plugin, for which there will be an open API.
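The open plugin API hasn’t been published yet, but conceptually a storage backend plugin might look something like this (everything here is hypothetical, including every name; it only illustrates the idea of a Pool Part that isn’t a local disk):

```python
from abc import ABC, abstractmethod

class PoolPartProvider(ABC):
    """Hypothetical interface for a backend that stores one Pool Part."""

    @abstractmethod
    def read(self, path: str, offset: int, length: int) -> bytes: ...

    @abstractmethod
    def write(self, path: str, offset: int, data: bytes) -> None: ...

    @abstractmethod
    def list_files(self, prefix: str) -> list[str]: ...

class InMemoryProvider(PoolPartProvider):
    """Toy backend used only to show the shape of the interface."""
    def __init__(self):
        self.files = {}

    def read(self, path, offset, length):
        return self.files[path][offset:offset + length]

    def write(self, path, offset, data):
        buf = bytearray(self.files.get(path, b""))
        buf[offset:offset + len(data)] = data
        self.files[path] = bytes(buf)

    def list_files(self, prefix):
        return [p for p in self.files if p.startswith(prefix)]

p = InMemoryProvider()
p.write("\\VMs\\disk.vmdk", 0, b"hello")
print(p.read("\\VMs\\disk.vmdk", 0, 5))  # b'hello'
```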

In this context, StableBit DrivePool’s file placement will gain a whole new use. It will allow you to define which folders on the pool are stored on which mediums. Moreover, with per-folder duplication, you will be able to specify which specific mediums will store the duplicated file parts of each folder.

A Standalone Product

I’ve just talked about how Product 3 can be used together with StableBit DrivePool, but really it’s much more than that. It’s going to be a full standalone product with some unique functionality. I’ll be talking about it some more as we get closer to the first public BETA.

StableBit DrivePool 1.3.6 and StableBit Scanner 2.5

Both of these have been in BETA for some time now. I’ll be working on getting these out into release final form soon.

StableBit DrivePool 1.3.6 is a bug fix release and StableBit Scanner 2.5 has some noteworthy new features.

Until next time, and thank you everyone for supporting us.

StableBit DrivePool 2.1.0.528 BETA – File Placement Update

Posted in StableBit on May 5th, 2014 by alex – 2 Comments

StableBit DrivePool 2.1.0.528 is now out. It features some updates to file placement and a bunch of bug fixes.

Build 528

StableBit DrivePool Build 528 BETA

Download it here: http://stablebit.com/DrivePool/Download

After the release of build 503, which introduced the new file placement balancing rules, we’ve had some fruitful discussions over on the forums about how file placement can be improved and this build incorporates some of those requested features. It also features some important bug fixes to file placement.

Folder Placement Changes

Folder Placement Rules

There are a few UI changes, as well as functional changes to how folder placement rules are applied.

Folder Placement Changes

Folder Placement Rule Inheritance

As you can see, there are now new icons next to folders that have folder placement rules applied to them. This makes it easy to tell which folders have rules defined on them and which are inheriting rules. The icon is green for a folder that has a folder placement rule defined directly on it, and black if the folder is inheriting a folder placement rule from one of its ancestor folders.

This brings us to a functional change that affects how all file placement rules behave in this build compared to build 503. In build 503, if you defined a folder placement rule on some folder and then defined a different rule on a subfolder of that folder, the rule for the subfolder would get combined with the rule on the parent folder. That behavior was counterintuitive, so the latest build doesn’t do that anymore. From now on, multiple rules are never combined; only the most specific rule applies.
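As a concrete illustration, here’s a minimal Python sketch of the new behavior (purely hypothetical code; the rule table and helper are illustrative, not DrivePool’s internals): a file’s destination is governed by the nearest ancestor folder that defines a rule, and rules are never merged.

```python
# Hypothetical sketch (not StableBit's actual code) of resolving a folder
# placement rule: walk up to the nearest ancestor folder that defines a
# rule and use only that rule, never merging parent rules in.

FOLDER_RULES = {
    "\\Media": {"D:", "E:"},    # rule defined on \Media
    "\\Media\\Movies": {"F:"},  # a different rule on a subfolder
}

def effective_rule(folder):
    """Return the rule of the nearest ancestor folder that defines one."""
    while folder:
        if folder in FOLDER_RULES:
            return FOLDER_RULES[folder]     # most specific rule wins; no merging
        folder = folder.rsplit("\\", 1)[0]  # step up to the parent folder
    return None

print(effective_rule("\\Media\\Movies"))  # only {'F:'}, not combined with the parent's rule
print(effective_rule("\\Media\\Music"))   # no rule of its own; inherits the \Media rule
```

In build 503 terms, `\Media\Movies` would have been placed on all of D:, E:, and F:; with the new behavior it is placed only on F:.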

Adding New Drives

There is now a new checkbox that lets you control what happens to each file placement rule when you add new drives. In build 503, adding a new drive would add that drive as a selected destination for each file placement rule that you’ve defined. In the latest build it’s the exact opposite: new drives are never added to your existing file placement rules by default, but you can bring back the old functionality on a rule-by-rule basis using this new checkbox.
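A rough sketch of what this checkbox amounts to, under the assumption that each rule carries a per-rule opt-in flag (the `include_new_drives` field and all names here are illustrative, not the product’s internals):

```python
# Hypothetical sketch: per-rule handling when a new drive joins the pool.
# "include_new_drives" stands in for the new checkbox; off by default.

from dataclasses import dataclass, field

@dataclass
class PlacementRule:
    pattern: str
    disks: set = field(default_factory=set)
    include_new_drives: bool = False  # the new checkbox; unchecked by default

def on_drive_added(rules, new_disk):
    for rule in rules:
        if rule.include_new_drives:  # old (build 503) behavior, now opt-in
            rule.disks.add(new_disk)

rules = [
    PlacementRule("\\Media\\*", {"D:"}),
    PlacementRule("*.iso", {"E:"}, include_new_drives=True),
]
on_drive_added(rules, "F:")
print([sorted(r.disks) for r in rules])  # [['D:'], ['E:', 'F:']]
```

Only the rule with the checkbox ticked picks up the new drive; the other rule keeps its original destinations.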

File Placement Changes

There are some functional and UI changes to the pattern-based file placement rule interface.

File Placement Rule Priorities

A very important functional change in this build is that only one file placement rule is ever chosen for each file when selecting a destination for that file. If multiple patterns match a given path, then the highest-priority pattern is chosen. This introduces the concept of rule priorities. In the UI, you can now rearrange your file placement rules by clicking the up / down arrows or with drag and drop.

When arranging your file placement rules, one thing to keep in mind is that priorities are automatically managed for folder-based rules (defined under the Folders tab). For folder-based rules, rules defined on deeper directory structures always have a higher priority than rules defined on shallower directory structures. This way the user doesn’t have to think about priorities when defining folder-based rules, and the rules work as expected. So you are allowed to rearrange your rules, as long as you don’t violate that restriction.

Pattern based rules can be arranged in any order and don’t have the same restriction. If you had any file placement rules defined in build 503, they will be automatically rearranged for you.
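To illustrate the idea, here’s a hedged Python sketch of priority-ordered pattern matching (the glob-style matching via `fnmatch` and the rule list are assumptions for illustration; DrivePool’s actual pattern syntax may differ):

```python
# Hypothetical sketch (not the product's actual implementation): choosing
# a destination via pattern-based rules, where only the single
# highest-priority matching rule is ever used.

import fnmatch

# Rules in priority order, highest first (as arranged in the UI).
RULES = [
    ("\\Media\\Movies\\*", {"F:"}),
    ("*.iso",              {"G:"}),
    ("\\Media\\*",         {"D:", "E:"}),
]

def destination_disks(path):
    for pattern, disks in RULES:
        if fnmatch.fnmatch(path, pattern):
            return disks  # first (highest-priority) match wins
    return None           # no rule applies to this file

# Both the first and second patterns match, but only the higher-priority
# rule is used:
print(destination_disks("\\Media\\Movies\\film.iso"))  # {'F:'}, not {'G:'}
```

Note how `\Media\Movies\film.iso` also matches `*.iso`, yet the rules are never combined; the match higher in the list decides the destination on its own.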

Multi Select

File Placement Multi Select

To make setting up and managing many rules easier, you can now select multiple rules (using the standard hold SHIFT / CTRL paradigm) and make changes to all of them at the same time.

Other Fixes

There are some other noteworthy changes in this build:

  • Tasks such as balancing and background duplication can now be manually aborted or boosted in priority from the pool organization bar.
  • The progress percentage for rebalancing and drive removal will now move continuously, even when moving large files.
  • Background duplication is now file placement aware and will try to respect the rules when possible. For example, when enabling duplication on a folder, if that folder has a file placement rule then the second copy of the file will be stored on one of the disks that match the rule (if possible). The same thing goes for disabling duplication. If a file part is violating a file placement rule, it will be cleaned up first.
  • It is now possible to disable a parent folder based rule for a particular subfolder by having all the disks checked on that subfolder.
  • When renaming a file on the pool violates a file placement rule, the service will be notified and a balancing pass will be scheduled.
  • Multiple disk list issues were fixed that were causing the disks list in the main UI to flicker and the tooltips to sometimes disappear.

You can read the full change log here: http://stablebit.com/DrivePool/ChangeLog?Platform=win

This build is now being pushed to everyone who is using the BETA via automatic updates.