My Veeam Report v1.3 with VBR v8 Support

Find the latest version here.

I finally got around to updating My Veeam Report for VBR v8. I waited long enough to hear other opinions on the upgrade, and as far as I can tell it has gone pretty smoothly for most folks. After taking the plunge and playing with some of the new features available in VBR v8, I must say I am quite happy with the result.

My original post regarding My Veeam Report can be found here, along with screenshots and credits.

For Veeam v7 users, you can grab My Veeam Report v1.2 here.

Not much has changed in v1.3 aside from VBR v8 support. I did add a couple of options around saving a copy of the report, which is nice for archiving, as well as an option to auto-launch the file, which is handy when running the script manually.

VBR has grown well beyond a simple backup product, and while this report supports those newer features, it still gives me a nice daily overview of backup activities. As always, let me know what you think.

The script can be downloaded from here.

17 thoughts on “My Veeam Report v1.3 with VBR v8 Support”

  1. Keltec

    Just a quick one: for people who use DD/MM/YYYY format, the above script generates an error on the Get-Date command in the license expiration function. On line 397, just change the array order to 0/1/2 and no problems 🙂

  2. Michael Kroell

    FANTASTIC SCRIPT! I have modified it to remove the VMs and just show jobs and a few details. I was wondering if it would be possible to edit the script to show just the last session for each job. I copied the Warning or Failed jobs section to make a Successful Jobs section and would like to ensure that every job only shows up once. So if a job fails but then succeeds, it will only log the success. I’m using this as a one-glance check of 18+ jobs covering over 300 VMs.

    1. smasterson Post author

      Should be fairly simple I would think (though I have not tried). In the same way you are showing Successful jobs, you should be able to easily create a new section to include only failed jobs.

      Assuming your successful jobs piece looks a little like this:
      $seshList | ?{$_.Result -eq "Success"}

      Failed jobs would look something like this:
      $seshList | ?{($_.Result -eq "Failed") -and ($_.WillBeRetried -ne "True")}

      The only jobs that would not show up in one or the other section should be jobs that are currently running.

      Hope this helps


      1. smasterson Post author

        After thinking about it, my last comment doesn’t take into consideration sessions that ended with warnings. I’m also not sure it was what you were ultimately looking for.

        If you just wanted a list of all jobs and the last result you could do something like this:
        Get-VBRJob | ?{$_.IsBackup} | Select Name, @{N="Last Result";E={$_.Info.LatestStatus}}

        This actually may be a nice addition to the report – stay tuned 😉

      2. Michael Kroell


        Thank you very much for the quick reply. If you like, I can send you a copy of the modifications I made to put them into better context. I believe your second reply is exactly what I am trying to do: get the latest session of all VBR jobs. The rest of your script would then run as normal; if the last session was a warning it goes into the warning section, and if the last session was successful it goes into the successful section.

        Pretty much the ultimate goal is to gather only the latest session before applying any of your script’s intelligence. That way, no matter the outcome, each job will only be listed once.

        Also, while we are on the topic, would it be possible to add the total number of VMs to the summary section at the top (alongside the number of jobs, warnings, successes, etc.)?

        Once again I greatly appreciate your help in this.
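
        Michael’s goal above (keeping only the most recent session per job, so a failed-then-retried job is counted once) could be sketched like this, assuming $seshList holds the session objects the script already gathers:

        ```powershell
        # Keep only the newest session for each job; a job that failed and then
        # succeeded on retry will be represented by its final (successful) run.
        $latestSessions = $seshList |
            Group-Object -Property JobName |
            ForEach-Object {
                $_.Group | Sort-Object -Property CreationTime -Descending |
                    Select-Object -First 1
            }

        # The existing Success/Warning/Failed filters can then run against
        # $latestSessions instead of $seshList, for example:
        $latestSessions | ?{$_.Result -eq "Success"}
        ```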

  3. smasterson Post author

    Glad to help where I can!

    What I am envisioning is this: I’m looking into adding a new section that will essentially replicate the view seen from the VBR console. A simple job list that will show Job Name, Type, Status, Last Result, Next Run and Target (I find ‘Objects in job’ to be pretty useless for the reasons in my notes below). Will that work for you?

    Regarding the number of VMs, I think I am going to steer clear of this one. Though it would be possible, it can get a bit tricky because jobs can contain entities other than individual VMs, such as datastores, folders, vApps, etc. The only way to accomplish this (that I am aware of) would be to obtain the entity from VBR, say a folder, and then query vSphere for all VMs in that folder. Adding the required code to connect to vSphere, though technically not that complicated, adds a bit of complexity and dependencies (PowerCLI installed, credentials, etc.)
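
    The console-style job list described above could be sketched as follows; the property and method names used here (JobType, GetLastState, Info.LatestStatus, ScheduleOptions.NextRun, GetTargetRepository) come from the v8 snap-in’s job objects and should be verified against your version before relying on them:

    ```powershell
    # Sketch: one row per job, similar to the VBR console's job view.
    Get-VBRJob | Sort-Object Name | Select-Object Name,
        @{N="Type";        E={$_.JobType}},
        @{N="Status";      E={$_.GetLastState()}},
        @{N="Last Result"; E={$_.Info.LatestStatus}},
        @{N="Next Run";    E={$_.ScheduleOptions.NextRun}},
        @{N="Target";      E={$_.GetTargetRepository().Name}}
    ```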

    1. Michael Kroell

      That’s a great idea, but for my purposes I really like what you have built. I really like seeing the end time of all jobs, just without having to see multiple retries.

      As far as VMs go, your current report spits out all VMs that have a successful backup and those that do not. Could you just add the number of successes at the top?

      Feel free to email me; let me know if you can’t see the email on the reply.



      1. Michael Kroell

        Sorry, forgot one detail. One quick thing I modified is a variable for the width of the report. This makes it easier to set to one’s liking. I wanted to be able to read it more easily on my iPhone. 🙂

  4. Oliver Geissler

    Hi there,

    great script! It seems that we get incorrect free disk space info when the free space is in the TB range?
    We have a Data Domain with 17.6 TB free, but the report doesn’t reflect it correctly.
    Looks like we should distinguish between GB and TB when calculating?

    Additionally, we do not see our DD Boost repositories (a new v8 feature). Is it possible to extend the report to get that information too?


    1. smasterson Post author

      Hi Oliver
      I’m not sure I understand what you mean by ‘if the disk space is in TB format’? I use the script against multiple 10+ TB repos with no issues.
      Unfortunately, I didn’t write the piece of code that looks at the repositories, and honestly, as it uses .NET assemblies to retrieve the data, I don’t understand it very well myself.
      Of course, the last kicker is that I don’t have a Data Domain to test with, so I’m not sure I’ll be able to extend it for DD Boost repos.
      You may want to look/ask around the Veeam forums to see if anyone else has come up with a solution.

      1. Oliver Geissler


        Curiously, only the Data Domain shares show incorrect values, i.e. 16.9 TB free / 23.6 TB capacity are listed as 4096.00 for both Free (GB) and Total (GB). A mapping of a DD share in Windows is reported correctly…
        Okay, no DD available for me either, but by the way, how about tape support? 🙂


        1. smasterson Post author

          Hi Oliver
          Now you are only increasing my curiosity regarding DD 🙂 But unfortunately, I have no way of testing against this.
          For the second bummer of the day…I’m going to have to say the same regarding tape drives, as I do not use any to test with.
          The cmdlets for tape drives are certainly there, but without any way to test the code, I have no way of ensuring the correct output. If someone would like to put together a function or snippet to gather this information, I would be happy to include it in the report.

  5. Chris

    Just noticed, it’s only seeing 4096 GB of my repositories, both of which are 8 TB.
    I haven’t looked through the code yet, but Veeam sees them correctly in the console.

    1. smasterson Post author

      Hi Chris
      I too recently noticed this when I extended one of my test (cifs) shares beyond 4TB. Are your repos cifs as well?
      This is only partly Veeam’s fault – mainly due to the fact that they provide no way to get this info via PS.
      The original creator of the function got really creative, using a DLL to obtain the info for Windows and Linux local repos while using standard PS to grab the CIFS info. Unfortunately, the Scripting.FileSystemObject used to gather this info is not super accurate for remote (non-Windows) hosts and completely fails when space is over 4TB on a remote share (as far as I can tell; not sure what other factors may be at play).

      I’m not sure I’ll be able to find a good fix for this without Veeam providing better info via their snap-in. I’ve recently provided feedback via the forums but wouldn’t expect anything to happen any time soon.
      I did find some clunky work-arounds but nothing that I could incorporate that would work for everyone.
      As this info is not really coming from Veeam anyway (other than the repo name and path), it may be best to use a separate script/tool to do the job of monitoring cifs repos.

      In v1.4 I am putting a check in place to show ‘unknown’ as opposed to the erroneous info provided when beyond 4TB.
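
      One workaround for the 4TB CIFS limit discussed above, should anyone want to experiment, is to skip Scripting.FileSystemObject entirely and call the Win32 GetDiskFreeSpaceEx API, which returns 64-bit values and works against UNC paths. This is a sketch, not part of the script; the share path below is just an example:

      ```powershell
      # Expose the Win32 API via P/Invoke; its out parameters are 64-bit,
      # so values above 4TB are reported correctly.
      Add-Type -TypeDefinition @"
      using System;
      using System.Runtime.InteropServices;
      public static class DiskSpace {
          [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Auto)]
          [return: MarshalAs(UnmanagedType.Bool)]
          public static extern bool GetDiskFreeSpaceEx(
              string lpDirectoryName,
              out ulong lpFreeBytesAvailable,
              out ulong lpTotalNumberOfBytes,
              out ulong lpTotalNumberOfFreeBytes);
      }
      "@

      function Get-RepoSpace {
          param([string]$Path)   # e.g. \\dd01\backups (example share name)
          $free = [uint64]0; $total = [uint64]0; $totalFree = [uint64]0
          if ([DiskSpace]::GetDiskFreeSpaceEx($Path, [ref]$free, [ref]$total, [ref]$totalFree)) {
              [PSCustomObject]@{
                  Path    = $Path
                  TotalGB = [math]::Round($total / 1GB, 2)
                  FreeGB  = [math]::Round($totalFree / 1GB, 2)
              }
          } else {
              Write-Warning "Could not query $Path"
          }
      }
      ```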

      1. chris

        Yes, the repos are CIFS shares. I’ll look at using a more native cmdlet for pulling the repository size/free space.

  6. Sebastian Talmon

    The problem with date formats for license checking on systems with different regional settings could be solved by specifying
    Get-Date -Day $datearray[1] -Month $datearray[0] -Year $datearray[2]
    instead of
    Get-Date $expirationDate
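
    Another culture-independent option is to parse the string with an explicit format instead of rebuilding the date field by field. This is a sketch; the "M/d/yyyy" format string is an assumption about how the license expiration date is stored:

    ```powershell
    # Parse the license expiration string against a fixed format with the
    # invariant culture, so the result does not depend on regional settings.
    $expirationDate = "4/15/2015"   # example value from the license info
    $parsed = [datetime]::ParseExact($expirationDate, "M/d/yyyy",
        [System.Globalization.CultureInfo]::InvariantCulture)
    $daysLeft = ($parsed - (Get-Date)).Days
    ```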

