Thursday, July 28, 2011

Qs on DMZ in VMWare

Here is a good explanation of how a DMZ fits into a virtualized environment.

-----------------------------------------------------------------------------
I wasn't clear on how to set up a DMZ with VMware, so I posted this question and got an answer from Edward L. Haletky.
Original Post: http://communities.vmware.com/message/1790430#1790430

Answer:
So you have the following:

vmnic0 --> a physical switch --> Linksys Internet router --> Internet

Not what I would do. Why? Because vmnic0 is often used by the management appliance in ESXi or the Service Console in ESX, so you'd rather not do this. The full picture is:

Mgmt <-> vSwitch0 <-> pNIC (vmnic0) <-> pSwitch <-> Router <-> Outside

What you really want is:

Mgmt/Internal <-> vSwitch0 <-> pNIC (vmnic0,vmnic2) <-> pSwitchI

DMZ  <-> vSwitchD <-> vFW <-> vSwitch1 <-> pNIC (vmnic1) <-> pSwitchE <-> Router <-> Outside

Then I would bridge vSwitchD and vSwitch1 with a vFW. You really want two physical switches: one for the DMZ and one for internal. If that is not possible, then use VLANs (but I highly recommend a second switch unless you are using high-end switches with all sorts of layer-2 protections).

If you want Internal to talk to the DMZ, then the virtual Firewall (vFW) could handle that for you as well, depending on what you use for that firewall. Always add a vFW to protect/segregate the DMZ. vSwitchD in this case is an internal vSwitch that does not have a pNIC connected to it, therefore it is considered private.


Edward L. Haletky
Communities Moderator, VMware vExpert
Author: VMware vSphere and Virtual Infrastructure Security; VMware ESX and ESXi in the Enterprise, 2nd Edition
Podcast: The Virtualization Security Podcast
Resources: The Virtualization Bookshelf
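
To experiment with this layout, here is a minimal PowerCLI sketch, assuming a vCenter at vcenter.example.com, a host named esx01.example.com, and the vmnic numbering from the diagrams above (all of these names are placeholders):

    # Connect to vCenter (placeholder names throughout)
    Connect-VIServer -Server vcenter.example.com
    $vmhost = Get-VMHost -Name esx01.example.com

    # vSwitch0 already exists with the management network; give it a second uplink
    Get-VirtualSwitch -VMHost $vmhost -Name vSwitch0 | Set-VirtualSwitch -Nic vmnic0,vmnic2

    # vSwitch1: external-facing side of the virtual firewall, uplinked to vmnic1
    $vsw1 = New-VirtualSwitch -VMHost $vmhost -Name vSwitch1 -Nic vmnic1

    # vSwitchD: no uplink at all, so it stays private; only the vFW bridges it to vSwitch1
    $vswD = New-VirtualSwitch -VMHost $vmhost -Name vSwitchD
    New-VirtualPortGroup -VirtualSwitch $vswD -Name DMZ

The vFW VM would then get one vNIC on the DMZ port group and one on vSwitch1.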


Tuesday, July 26, 2011

Install Exchange 2010 Notes

If you're installing Exchange 2010 on the Windows Server 2008 R2 operating system, don't use the downloadable .NET Framework package. Instead, use Server Manager in Windows Server 2008 R2 or run ServerManagerCmd -i NET-Framework. See http://technet.microsoft.com/en-us/library/dd638130.aspx
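
On Windows Server 2008 R2 you can also do this from PowerShell via the ServerManager module (a sketch of the same install step):

    # Same feature install from PowerShell on Windows Server 2008 R2
    Import-Module ServerManager
    Add-WindowsFeature NET-Framework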

Some Key Terminology of Exchange 2010

Database availability group (DAG)
A group of up to 16 Exchange 2010 Mailbox servers that hosts a set of replicated databases.
A DAG is the base component of the high availability and site resilience framework built into Exchange 2010. A DAG is a group of up to 16 Mailbox servers that hosts a set of databases and provides automatic database-level recovery from failures that affect individual databases. Any server in a DAG can host a copy of a mailbox database from any other server in the DAG. When a server is added to a DAG, it works with the other servers in the DAG to provide automatic recovery from failures that affect mailbox databases, such as a disk failure or server failure.

Exchange 2007 introduced a built-in data replication technology called continuous replication. Continuous replication, which was available in three forms (local, cluster, and standby), significantly reduced the cost of deploying a highly available Exchange infrastructure, and provided a much improved deployment and management experience over previous versions of Exchange. Even with these cost savings and improvements, however, running a highly available Exchange 2007 infrastructure still required much time and expertise because the integration between Exchange and Windows failover clustering wasn't seamless. In addition, customers wanted an easier way to replicate their e-mail data to a remote location, to protect their Exchange environment against site-level disasters.

Exchange 2010 uses the same continuous replication technology found in Exchange 2007. Exchange 2010 combines on-site data replication (CCR) and off-site data replication (SCR) into a single framework called a database availability group (DAG). After servers are added to a DAG, you can add replicated database copies incrementally (up to 16 total), and Exchange 2010 switches between these copies automatically, to maintain availability.

Unlike Exchange 2007, where clustered mailbox servers required dedicated hardware, Mailbox servers in a DAG can host other Exchange roles (Client Access, Hub Transport, and Unified Messaging), providing full redundancy of Exchange services and data with just two servers.

This new high availability architecture also provides simplified recovery from a variety of failures (disk-level, server-level, and datacenter-level), and the architecture can be deployed on a variety of storage types.

For more information about DAGs, see Understanding Database Availability Groups.
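
For reference, creating a DAG comes down to a couple of Exchange Management Shell cmdlets. A minimal sketch, assuming made-up server names (DAG1, HUB1, MBX1, MBX2) and a file share witness on the Hub Transport server:

    # Create the DAG with a file share witness
    New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer HUB1 -WitnessDirectory C:\DAG1FSW

    # Add Mailbox servers; Exchange installs the failover clustering components for you
    Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX1
    Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX2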

Database mobility
The ability of a single Exchange 2010 mailbox database to be replicated to and mounted on other Exchange 2010 Mailbox servers.
Disaster recovery
Any process used to manually recover from a failure. This can be a failure that affects a single item, or it can be a failure that affects an entire physical location.
High availability
A solution that provides service availability, data availability, and automatic recovery from failures that affect the service or data (such as a network, storage, or server failure).
With the significant core improvements made to Exchange 2010, the recommended maximum mailbox database size when using continuous replication has increased from 200 gigabytes (GB) in Exchange 2007 to 2 terabytes in Exchange 2010. With more companies realizing the greater value in large mailboxes (from 2 GB through 10 GB), significantly larger database sizes can quickly become a reality. Supporting larger databases means moving away from legacy recovery mechanisms, such as backup and restore, and moving to newer, faster forms of protection, such as data replication and server redundancy. Ultimately, the size of your mailbox databases depends on many factors that you derive during the Exchange 2010 planning process. For detailed planning guidance for mailboxes and Mailbox servers, see Mailbox Server Storage Design.
Lagged mailbox database copy
A passive mailbox database copy that has a log replay lag time greater than zero.
Mailbox database copy
A mailbox database (.edb file and logs), which is either active or passive.
Mailbox resiliency
The name of a unified high availability and site resilience solution in Exchange 2010.
Site resilience
A manual disaster recovery process used to activate an alternate or standby datacenter when the primary datacenter is no longer able to provide a sufficient level of service to meet the needs of the organization. Also includes the process of re-activating a primary datacenter that has been recovered, restored or recreated. You can configure your messaging solution for high availability and enable site resilience using the built-in features and functionality in Exchange 2010.
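
Tying a few of these terms together, here is a hedged Exchange Management Shell sketch (DB1, MBX2, and MBX3 are made-up names) that adds a normal passive copy and a lagged copy of the same database:

    # Passive copy on MBX2, second in activation preference
    Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer MBX2 -ActivationPreference 2

    # Lagged copy on MBX3: the 3-day replay lag is what makes it a lagged mailbox database copy
    Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer MBX3 -ReplayLagTime 3.00:00:00 -ActivationPreference 3

    # Check the health and copy/replay queues of every copy
    Get-MailboxDatabaseCopyStatus -Identity DB1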

Exchange 2010 STD or ENT

Server: Standard
CAL: Enterprise

Reasons:
Server: 5 database stores maximum, 200 GB max per store
http://www.infotechguyz.com/exchange2010/exchange2010editions.html

CAL:

The functionality available to a client depends on the CAL and is independent of the server edition.

  • Integrated Archiving (Ent CAL)

  • Multi-mailbox Search & Legal Hold (Ent CAL)

  • Advanced Journaling (Ent CAL)


Exchange 2010: Editions and Versions
http://technet.microsoft.com/en-us/library/bb232170.aspx

  • No loss of functionality will occur when the Trial Edition expires, so you can maintain lab, demo, training, and other non-production environments beyond 120 days without having to reinstall the Trial Edition of Exchange 2010.

  • You can also use a valid product key to move from Standard Edition to Enterprise Edition.

  • The RTM version of Exchange 2010 is 14.00.0639.021. The SP1 version of Exchange is 14.01.0218.015.
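
To verify which edition and build you actually ended up with, one line of Exchange Management Shell does it:

    # Edition and version of every Exchange server in the org
    Get-ExchangeServer | Format-List Name,Edition,AdminDisplayVersion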



Wednesday, July 20, 2011

VPN Clients cannot Ping beyond RRAS Server (DR-Site)

My workaround: manually assign a range of LAN IP addresses in the static address pool.

To create a static IP address pool




  1. Open Routing and Remote Access.

  2. Right-click the server name for which you want to create a static IP address pool, and then click Properties.

  3. On the IP tab, click Static address pool, and then click Add.

  4. In Start IP address, type a starting IP address, and then either type an ending IP address for the range in End IP address or type the number of IP addresses in the range in Number of addresses.

  5. Click OK, and then repeat steps 3 and 4 for as many ranges as you need to add.
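
The same pool can be created from the command line; a sketch with an example range (adjust the addresses to your LAN):

    # Switch RRAS to a static pool and add a range of LAN addresses
    netsh ras ip set addrassign method = pool
    netsh ras ip add range from = 192.168.1.200 to = 192.168.1.220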



This is a good article on troubleshooting this issue.

Thursday, July 14, 2011

Get Service Tag using Command on Dell Systems

Type "wmic bios get serialnumber" at the command prompt to retrieve the Service Tag.
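
The same value is available through WMI from PowerShell, if you prefer:

    # The Service Tag is exposed as the BIOS serial number
    (Get-WmiObject -Class Win32_BIOS).SerialNumber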

Wednesday, July 6, 2011

VMware Lab Setup

Without spending any $$$, I utilized our spare desktops to set up an ESXi 4.1.0 lab. My goal was to have two ESXi hosts and one iSCSI SAN.

Hardware:


  • Hosts: Two Dell Precision T5400, 4 GB memory each.
    They came with the Intel E5405 processor, which supports VT and 64-bit. I tried a Precision T3400 first and didn't check whether it supports VT. Of course, it does not. Check whether an Intel CPU supports VT here.

  • For configuration: check this article (read the section "Volumes – Important Information (for the clarity of mind)").

  • Downloaded and installed OpenFiler. I had a bit of a hard time getting it configured properly until I read this good white paper by John Borhek, VMsources.

Notes:

  1. Install ESXi 4.1.0 Build 260247 on each T5400. (A PowerCLI equivalent of steps 1.2–1.4 is sketched after these notes.)

    1. Create a VMkernel port on the same vSwitch for the iSCSI connection.

    2. Select the iSCSI Software Adapter under Storage Adapters, go to Properties, and enable the software initiator.

    3. Then go to Dynamic Discovery and add OpenFiler's IP address as the target.

    4. Go to Storage; you should see the new LUN. If not, run the Add Storage wizard.



  2. Install OpenFiler on Precision 390.



    1. I used the auto-partition option, which the instructions don't recommend.

    2. Add a secondary hard drive for iSCSI shared storage.



  3. Install vCenter 4.1.0 Build 258902


Tested migrating the vCenter VM between hosts. No problem.
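
For the record, here is a PowerCLI sketch of iSCSI steps 1.2–1.4 above; the host name and OpenFiler IP are placeholders for my lab values:

    # Enable the software iSCSI initiator on the host
    $vmhost = Get-VMHost -Name esx01.example.com
    Get-VMHostStorage -VMHost $vmhost | Set-VMHostStorage -SoftwareIScsiEnabled $true

    # Add the OpenFiler box as a dynamic discovery (Send Targets) address
    $hba = Get-VMHostHba -VMHost $vmhost -Type IScsi
    New-IScsiHbaTarget -IScsiHba $hba -Address 192.168.1.50 -Type Send

    # Rescan so the new LUN shows up under Storage
    Get-VMHostStorage -VMHost $vmhost -RescanAllHba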
