
Orca – Server 2012 Migration

Summary

We will be transitioning the Orca academic file server to Windows Server 2012. There are a handful of reasons for doing this, including the fact that Orca is currently a 32-bit Windows 2008 server and uses Windows dynamic disks for the programs share. Additionally, once the data is on Server 2012 we can take advantage of data deduplication, which should reduce the total disk space consumed by at least 10-20%. This space savings is important because Orca is our largest file server.
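
As a rough sketch, the deduplication setup on the replacement server might look something like the following (the E: volume is a placeholder, not the actual drive letter):

  # Install the Data Deduplication role service on the new Server 2012 box
  Install-WindowsFeature -Name FS-Data-Deduplication

  # Enable dedup on the volume that will hold the programs share (E: is a placeholder)
  Enable-DedupVolume -Volume "E:"

  # Run an optimization pass now rather than waiting for the scheduled job
  Start-DedupJob -Volume "E:" -Type Optimization

  # Report how much space dedup is actually reclaiming
  Get-DedupStatus -Volume "E:" | Format-List Volume, SavedSpace, OptimizedFilesCount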

Scope

The transition is intended to be as seamless and unnoticeable to end users as possible. The new server will assume the same name and IP address. Much of the data on the dynamic disk will be copied over beforehand in order to reduce the necessary downtime. At this time I expect the final cutover to require the server to be unavailable for up to 2 hours. This project had a prerequisite of upgrading our ESXi infrastructure to version 5.5.

Project Manager: Josh Jordan

Team: Network Services

  • Josh Jordan

Groups Involved

  • Network Services
  • Academic Computing (to help schedule the cutover and provide notification to students)
  • Client Services (to help schedule the cutover and provide notification to staff/faculty)

 

Start: Feb 2014 (with prerequisite work done in Jan)

Completion: TBD

Plan:

Jan: Upgrade ESXi to version 5.5 in order to support large virtual disks

End of February: Build replacement server

March 10th-24th – The programs share will be copied to the new server at a slow rate so as not to interfere with normal access, as sketched below.
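
The pre-copy itself will likely be a throttled robocopy along these lines (the source and destination paths below are placeholders for illustration):

  # Pre-seed the programs share onto the replacement server.
  # /IPG inserts a delay (in ms) between 64 KB blocks so the copy
  # does not saturate the link during normal business hours.
  robocopy "D:\Programs" "\\orca-new\e$\Programs" /MIR /COPYALL /IPG:250 /R:2 /W:5 /LOG:"C:\Logs\orca-precopy.log"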

Mar 24th – Apr 11th – Verify that disk quotas, quota templates, and the pfcc program work correctly, and that the various clients supported at Evergreen can connect properly. Also, make sure that DUC can properly provision and delete home directories.
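
A minimal sketch of the quota portion of that verification, assuming the quotas are managed through FSRM (the share names below are placeholders):

  # Confirm the quota templates were recreated on the new server
  Get-FsrmQuotaTemplate | Select-Object Name, Size | Sort-Object Name

  # Spot-check the quotas applied to individual directories
  Get-FsrmQuota | Select-Object Path, Size, Usage

  # Quick connectivity check against the shares clients will use
  Test-Path "\\orca\home", "\\orca\programs"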

April-May – Schedule the cutover with Client Services and Academic Computing. This will depend on when downtime is acceptable for the server.
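
Since the new server assumes Orca's existing name and IP address at cutover, the identity portion of that step might look roughly like the following; names, interface alias, and addresses are placeholders, and the old server would be taken offline and removed from the domain first:

  # Give the replacement server Orca's name (placeholder values throughout)
  Rename-Computer -NewName "ORCA" -DomainCredential (Get-Credential) -Restart

  # Re-assign Orca's IP address and DNS servers on the new machine
  New-NetIPAddress -InterfaceAlias "Ethernet0" -IPAddress "192.0.2.50" -PrefixLength 24 -DefaultGateway "192.0.2.1"
  Set-DnsClientServerAddress -InterfaceAlias "Ethernet0" -ServerAddresses "192.0.2.10", "192.0.2.11"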

Any updates will be added to the documentation here: \\hurricane\PCCommon\Computing and Communications\Orca_Server2012_Migration

Relocate offsite backups

Summary

We will be moving two of our ExaGrid appliances to the Tacoma campus to store offsite backups. The ExaGrid replication targets currently reside in SEM II, while the primary ExaGrid backup targets are in the machine room. Preparing for this move required us to first determine how much bandwidth was needed to keep replication caught up, and then to get the WAN link to Tacoma upgraded to 100 Mb/s.
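
For context, the bandwidth determination is essentially a back-of-envelope calculation like the one below; the nightly replication volume and window are illustrative placeholders, not measured figures:

  # Can the nightly replication delta finish inside the overnight window?
  # All numbers here are placeholders for illustration only.
  $nightlyDeltaGB = 300      # post-dedup data replicated per night (placeholder)
  $windowHours    = 10       # overnight replication window (placeholder)

  $requiredMbs = [math]::Round(($nightlyDeltaGB * 8e9) / ($windowHours * 3600) / 1e6, 1)
  "Required throughput: $requiredMbs Mb/s"   # ~66.7 Mb/s here, so a 100 Mb link keeps up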

Scope

The project will not change the backup process – only the destination for offsite backups, which will move from another building on campus to the Tacoma campus. This should not impact end users at all.

Project Manager: Josh Jordan

Team: Network Services

  • Josh Jordan
  • Dan Scherck

Groups Involved

  • Network Services

 

Start: March 1st, 2014 (estimated)

Completion: April 4th, 2014 (estimated)

Plan:

Mar 1st-16th – Dan will verify the link speed between the Olympia and Tacoma campuses and enable QoS rules to make sure ExaGrid replication traffic does not impact normal business traffic. He will also configure the necessary switch ports in the Tacoma network closet.

Mar 17th – If Dan’s testing and configuration are complete, Josh will notify Albert Villemure at ExaGrid about the upcoming move and coordinate the IP address changes that are needed.

Mar 24th-28th – We will disable throttling on the SEM II link so replication can fully catch up. Then we will notify the IT System Outage group that offsite backups will be unavailable during the move. Josh will then get keys, physically move the ExaGrid appliances to the Tacoma site, and connect them to the network. Once connectivity has been verified, Josh will do a test backup and monitor replication. ExaGrid will be notified when the work has been completed so they can monitor replication as well.

Apr 2nd-4th – Josh will verify that backups are replicating at an acceptable rate to the Tacoma ExaGrid devices.

Any updates will be added to the documentation here: \\hurricane\pccommon\Computing and Communications\ExaTac_Migration

Extron GVE deployment

Project Summary: We have built a server and will be configuring Extron GlobalViewer Enterprise (GVE) software that will allow EME to centrally manage and monitor the Extron multimedia hardware deployed throughout campus.

GlobalViewer Enterprise is described as offering the following features:

  • Server-based AV system monitoring and resource management software
  • Enterprise-wide scheduling, monitoring and enhanced help desk functionality
  • Supports Extron TouchLink™, IP Link®, MediaLink®, and PoleVault® as well as third-party control systems such as AMX and Crestron
  • Increased integration with third-party facility scheduling software, such as Microsoft Exchange Server and CollegeNET R25
  • Customizable layouts allow help desk views to be tailored by user account
  • Support for projectors with multiple lamps
  • iGVE app enables on-the-go access to GlobalViewer Enterprise from your iPhone and iPod touch
  • Extensive 24/7 system data collection with a comprehensive set of management reports
  • Scalability to grow as your AV system installation grows
  • No programming required

Project Leads: Josh Jordan, Rob Rensel

Groups Involved: Network Services, EME

Estimated Completion Date: TBD