ESXi-5.0-U1

unknown 2015-10-23 15:52:36 -04:00
parent 763922b583
commit d0a14f9737
37 changed files with 5460 additions and 1054 deletions


@@ -1,5 +1,5 @@
 /* **********************************************************
- * Copyright 2006 - 2010 VMware, Inc. All rights reserved.
+ * Copyright 2006 - 2011 VMware, Inc. All rights reserved.
  * **********************************************************/
 /*
@@ -1543,6 +1543,9 @@ typedef enum {
     /** \brief hidden from management apps */
     VMK_UPLINK_FLAG_HIDDEN    = 0x01,
+    /** This will be set if physical device is being registered as pseudo-dev */
+    VMK_UPLINK_FLAG_PSEUDO_REG = 0x02,
 } vmk_UplinkFlags;
 /**


@@ -1,5 +1,5 @@
 /* **********************************************************
- * Copyright 1998 VMware, Inc. All rights reserved.
+ * Copyright 1998 VMware, Inc. All rights reserved. -- VMware Confidential
  * **********************************************************/
 /*


@@ -1,6 +1,6 @@
-#define BUILD_NUMBER "build-456551"
-#define BUILD_NUMBER_NUMERIC 456551
-#define BUILD_NUMBER_NUMERIC_STRING "456551"
-#define PRODUCT_BUILD_NUMBER "product-build-26935"
-#define PRODUCT_BUILD_NUMBER_NUMERIC 26935
-#define PRODUCT_BUILD_NUMBER_NUMERIC_STRING "26935"
+#define BUILD_NUMBER "build-623860"
+#define BUILD_NUMBER_NUMERIC 623860
+#define BUILD_NUMBER_NUMERIC_STRING "623860"
+#define PRODUCT_BUILD_NUMBER "product-build-45730"
+#define PRODUCT_BUILD_NUMBER_NUMERIC 45730
+#define PRODUCT_BUILD_NUMBER_NUMERIC_STRING "45730"


@@ -1,29 +1,19 @@
-The following assumes the files disclosed for this package have been
-copied to the directory "/usr/vmware/src":
-rm -rf /usr/vmware/src
-mkdir /usr/vmware/src
-cp * /usr/vmware/src
-mkdir
-And any commands that need to be executed for the disclosure should be
-executed from this directory on a "centos-5.3-x64" system (see the file
-"SYSTEMS.txt" for definition of this system).
-This package should be built on a "centos-5.3-x64" system. Please see the
-"System Configurations" document for a definition of the configuration
-of this system type.
-To build this package please execute the following commands:
-tar xzf vmkdrivers-gpl.tgz
-sh ./build-vmkdrivers.sh
-mv collect-drivers.sh BLD/build/collect-drivers.sh
-cd BLD/build
-./collect-drivers.sh
-If you would like to verify the installation of this package, please
-create the binary disclosure file for this package using the command:
-tar cvf /usr/vmware/src/centos-5.3-x64.tar -C /usr/vmware/src/BLD/build drivers -C /usr/vmware/src update-drivers.sh
-This file is used in the installation instructions.
+Required packages:
+GNU grep 2.5.1
+GNU sed 4.5.1
+GNU xargs 4.2.27
+from GNU coreutils 5.97
+These packages are required to be built and installed in their sub-directories,
+in this order, as 'root' (see BUILD.txt in each sub-directory):
+glibc-2.3.2-95.44
+binutils-2.17.50.0.15-modcall
+gcc-4.1.2-9
+As 'root', build the vmkdrivers-gpl:
+1) tar xzf vmkdrivers-gpl.tgz
+2) chmod +x build-vmkdrivers.sh && ./build-vmkdrivers.sh


@@ -1,19 +0,0 @@
-The installation of this package should be performed on an ESXi server.
-This server should have SSH access enabled: Customize System/View Logs ->
-Troubleshooting Options -> Enable SSH
-To install the package create a working directory on a datastore on the
-ESXi server, e.g., "/vmfs/volumes/Storage1/install" and copy the binary
-disclosure created by the build:
-scp centos-5.3-x64.tar root@esx.example.org:/vmfs/volumes/Storage1/install/centos-5.3-x64.tar
-On the ESXi host, verify the shipped version:
-Replace the shipped version with the custom build:
-cd /vmfs/volumes/Storage1/install
-tar xf centos-5.3-x64.tar
-./update-drivers.sh
-Reboot the system to load the updated drivers.

README

@@ -1,10 +1,8 @@
 This package contains the source code for the drivers
 that are included in the VMware ESX server product.
-If building from the open source distribution, see BUILD.txt,
-first.
-Otherwise, review and execute the 'build-vmkdrivers.sh' script.
+To build the drivers, execute the 'build-vmkdrivers.sh'
+script.
 The following program versions should be used to compile
 the drivers:


@@ -1,5 +1,5 @@
 /* **********************************************************
- * Copyright 2008 VMware, Inc. All rights reserved.
+ * Copyright 2008 VMware, Inc. All rights reserved. -- VMware Confidential
  * **********************************************************/
 /*

build-vmkdrivers.sh (Normal file → Executable file)

File diff suppressed because one or more lines are too long


@@ -1,13 +0,0 @@
-#! /bin/bash
-# Iterate through this directory and copy all of the driver files into a directory
-mkdir drivers
-for filename in *
-do
-    if [[ "$filename" == vmkdriver* ]]; then
-        driverName=${filename//vmkdriver-/}
-        driverName=${driverName//-CUR/}
-        driverPath=$filename/release/vmkernel64/$driverName
-        cp $driverPath drivers
-    fi
-done
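The deleted collect script derives each module's file name from its build directory by stripping the `vmkdriver-` prefix and `-CUR` suffix (bash `${var//pat/}` replaces every occurrence). The same transformation, as a small Python sketch (directory names below are illustrative examples, not an exhaustive list):

```python
def driver_name(dirname):
    """Map a build directory like 'vmkdriver-e1000e-CUR' to the module
    file it produces ('e1000e'), mirroring the deleted script's
    ${filename//vmkdriver-/} and ${filename//-CUR/} substitutions."""
    return dirname.replace("vmkdriver-", "").replace("-CUR", "")

print(driver_name("vmkdriver-e1000e-CUR"))  # -> e1000e
```

The driver binary is then expected at `<dir>/release/vmkernel64/<name>`, which is what the script's `cp` collects.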


@@ -1,76 +0,0 @@
-#!/bin/sh
-# Update driver files with contents of "drivers" directory
-set -e
-TMP=$PWD/tmp
-DRIVERS=$PWD/drivers
-mkdir -p $TMP
-if [ ! -d $DRIVERS ]
-then
-    echo "Could not find \"$DRIVERS\" directory"
-    exit 1
-fi
-for driver_path in `find /vmfs -name temp -prune -o -name "*.v0*" -print | sort`
-do
-    echo "+++ Examining $driver_path"
-    REPLACE_IT=0
-    basename=`basename $driver_path`
-    rm -f $TMP/$basename
-    zcat $driver_path > $TMP/$basename
-    vmtar -x $TMP/$basename -o $TMP/$basename.tar
-    rm -rf $TMP/$basename.tmp
-    mkdir -p $TMP/$basename.tmp
-    tar -C $TMP/$basename.tmp -xf $TMP/$basename.tar
-    # For each driver found in the ESXi tarball, see if it
-    # is in the OSS tarball, and replace it if it is.
-    if [ -d $TMP/$basename.tmp/usr/lib/vmware/vmkmod ]
-    then
-        for driver in `ls $TMP/$basename.tmp/usr/lib/vmware/vmkmod/`
-        do
-            repl=$DRIVERS/$driver
-            if [ -e $repl ]
-            then
-                dst=$TMP/$basename.tmp/usr/lib/vmware/vmkmod/$driver
-                echo Updating $dst with $repl
-                cp $repl $dst
-                # we found something to replace
-                REPLACE_IT=1
-            fi
-        done
-    fi
-    # If we updated a driver, make a new tarball and move it in
-    # place.
-    if [ $REPLACE_IT == 1 ]
-    then
-        cd $TMP/$basename.tmp
-        rm -f $TMP/$basename.new.tar
-        tar -cf $TMP/$basename.new.tar *
-        cd $OLDPWD
-        rm -f $TMP/$basename.new
-        vmtar -c $TMP/$basename.new.tar -o $TMP/$basename.new
-        rm -f $TMP/$basename.new.gz
-        gzip $TMP/$basename.new
-        echo +++ Replacing $driver_path with $TMP/$basename.new.gz
-        #echo -n OLD:
-        #ls -la $driver_path
-        cp $TMP/$basename.new.gz $driver_path
-        #echo -n SRC:
-        #ls -la $TMP/$basename.new.gz
-        #echo -n NEW:
-        #ls -la $driver_path
-    else
-        echo +++ No updates needed for $driver_path
-    fi
-done


@@ -584,6 +584,11 @@ static const struct pci_device_id ahci_pci_tbl[] = {
     { PCI_VDEVICE(INTEL, 0x1d02), board_ahci }, /* PB AHCI */
     { PCI_VDEVICE(INTEL, 0x1d04), board_ahci }, /* PB AHCI */
     { PCI_VDEVICE(INTEL, 0x1d06), board_ahci }, /* PB AHCI */
+    { PCI_VDEVICE(INTEL, 0x1e02), board_ahci }, /* Panther Point AHCI */
+    { PCI_VDEVICE(INTEL, 0x1e03), board_ahci }, /* Panther Point AHCI */
+    { PCI_VDEVICE(INTEL, 0x2321), board_ahci }, /* Cave Creek AHCI */
+    { PCI_VDEVICE(INTEL, 0x2323), board_ahci }, /* Cave Creek AHCI */
+    { PCI_VDEVICE(INTEL, 0x2326), board_ahci }, /* Cave Creek AHCI */
 #endif /* defined(__VMKLNX__) */
     /* JMicron 360/1/3/5/6, match class to avoid IDE function */


@@ -311,6 +311,14 @@ static const struct pci_device_id piix_pci_tbl[] = {
     { 0x8086, 0x1d00, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata_ahci },
     /* SATA Controller IDE (PB) */
     { 0x8086, 0x1d08, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata_ahci },
+    /* SATA Controller IDE (Panther Point) */
+    { 0x8086, 0x1e00, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata_ahci },
+    /* SATA Controller IDE (Panther Point) */
+    { 0x8086, 0x1e01, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata_ahci },
+    /* SATA Controller IDE (Panther Point) */
+    { 0x8086, 0x1e08, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata },
+    /* SATA Controller IDE (Panther Point) */
+    { 0x8086, 0x1e09, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata },
 #endif /* defined(__VMKLNX__) */
     { } /* terminate list */


@@ -368,6 +368,7 @@ struct e1000_adapter {
 #endif // CONFIG_E1000_NAPI
 #endif /* defined(__VMKLNX__) */
+    bool discarding;
 };
 #define E1000_FLAG_HAS_SMBUS (1 << 0)


@@ -4341,13 +4341,21 @@ static bool e1000_clean_rx_irq(struct e1000_adapter *adapter,
         length = le16_to_cpu(rx_desc->length);
         /* !EOP means multiple descriptors were used to store a single
-         * packet, also make sure the frame isn't just CRC only */
-        if (unlikely(!(status & E1000_RXD_STAT_EOP) || (length <= 4))) {
+         * packet, if thats the case we need to toss it. In fact, we
+         * to toss every packet with the EOP bit clear and the next
+         * frame that _does_ have the EOP bit set, as it is by
+         * definition only a frame fragment
+         */
+        if (unlikely(!(status & E1000_RXD_STAT_EOP)))
+            adapter->discarding = true;
+        if (adapter->discarding) {
             /* All receives must fit into a single buffer */
             E1000_DBG("%s: Receive packet consumed multiple"
                       " buffers\n", netdev->name);
-            /* recycle */
-            buffer_info->skb = skb;
+            dev_kfree_skb_irq(skb);
+            if (status & E1000_RXD_STAT_EOP)
+                adapter->discarding = false;
             goto next_desc;
         }
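In effect, the old per-descriptor length check is replaced by a two-state discard machine: once a descriptor arrives without EOP (end of packet), every buffer is freed until the descriptor that does carry EOP, so a multi-buffer frame is never partially delivered. A minimal Python model of that control flow (the descriptor tuples and function name are illustrative, not the driver's):

```python
def clean_rx_ring(descriptors):
    """Model of the patched e1000 RX loop: drop every buffer of any
    frame whose descriptor lacks the EOP (end-of-packet) bit, and keep
    dropping until the fragment that finally carries EOP."""
    delivered, discarding = [], False
    for payload, eop in descriptors:
        if not eop:
            discarding = True          # start (or continue) tossing fragments
        if discarding:
            if eop:                    # last fragment seen: re-arm for next frame
                discarding = False
            continue                   # buffer freed, nothing handed to the stack
        delivered.append(payload)      # single-buffer frame: deliver it
    return delivered

frames = [(b"ok1", True), (b"frag-a", False), (b"frag-b", False),
          (b"frag-end", True), (b"ok2", True)]
print(clean_rx_ring(frames))           # only the single-buffer frames survive
```

Note that the fragment carrying EOP is itself discarded: it is, by definition, only the tail of a frame that was already tossed.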


@@ -338,6 +338,8 @@ struct e1000_adapter {
     struct work_struct led_blink_task;
     struct work_struct print_hang_task;
     u32 *config_space;
+    bool discarding;
 };
 struct e1000_info {


@@ -210,7 +210,12 @@ s32 e1000_check_alt_mac_addr_generic(struct e1000_hw *hw)
     /* Check for LOM (vs. NIC) or one of two valid mezzanine cards */
     if (!((nvm_data & NVM_COMPAT_LOM) ||
           (hw->adapter->pdev->device == E1000_DEV_ID_82571EB_SERDES_DUAL) ||
+#if defined(__VMKLNX__)
+          (hw->adapter->pdev->device == E1000_DEV_ID_82571EB_SERDES_QUAD) ||
+          (hw->adapter->pdev->device == E1000_DEV_ID_82571EB_SERDES)))
+#else /* !defined(__VMKLNX__) */
           (hw->adapter->pdev->device == E1000_DEV_ID_82571EB_SERDES_QUAD)))
+#endif /* defined(__VMKLNX__) */
         goto out;
     ret_val = e1000_read_nvm(hw, NVM_ALT_MAC_ADDR_PTR, 1,


@@ -776,12 +776,20 @@ static bool e1000_clean_rx_irq(struct e1000_adapter *adapter)
         length = le16_to_cpu(rx_desc->length);
         /* !EOP means multiple descriptors were used to store a single
-         * packet, also make sure the frame isn't just CRC only */
-        if (!(status & E1000_RXD_STAT_EOP) || (length <= 4)) {
+         * packet, if thats the case we need to toss it. In fact, we
+         * to toss every packet with the EOP bit clear and the next
+         * frame that _does_ have the EOP bit set, as it is by
+         * definition only a frame fragment
+         */
+        if (unlikely(!(status & E1000_RXD_STAT_EOP)))
+            adapter->discarding = true;
+        if (adapter->discarding) {
             /* All receives must fit into a single buffer */
             e_dbg("Receive packet consumed multiple buffers\n");
-            /* recycle */
-            buffer_info->skb = skb;
+            dev_kfree_skb_irq(skb);
+            if (status & E1000_RXD_STAT_EOP)
+                adapter->discarding = false;
             goto next_desc;
         }
@@ -4847,6 +4855,18 @@ static netdev_tx_t e1000_xmit_frame(struct sk_buff *skb,
          * frags into skb->data
          */
         hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+#ifdef __VMKLNX__
+        /*
+         * The tcp header plus four bytes must fit in the first
+         * segment otherwise the pnic gets wedged.
+         */
+        if (max_per_txd <= (hdr_len + 4)) {
+            dev_kfree_skb_any(skb);
+            return NETDEV_TX_OK;
+        }
+#endif /* __VMKLNX__ */
         /*
          * we do this workaround for ES2LAN, but it is un-necessary,
          * avoiding it could save a lot of cycles


@@ -802,6 +802,7 @@ typedef struct sds_host_ring_s {
     struct napi_struct napi;
     unsigned int netdev_irq;
    char netdev_name[IFNAMSIZ];
+    uint32_t napi_enable;
 #endif
     nx_host_sds_ring_t *ring;
     nx_free_rbufs_t free_rbufs[MAX_RX_DESC_RINGS];


@@ -29,7 +29,7 @@
 /*
  * Source file for NIC routines to access the Phantom hardware
  *
- * $Id: //depot/vmkdrivers/esx50/src_9/drivers/net/nx_nic/unm_nic_hw.c#1 $
+ * $Id: //depot/vmkdrivers/esx50u1/src_9/drivers/net/nx_nic/unm_nic_hw.c#1 $
  *
  */
 #include <linux/delay.h>


@@ -998,6 +998,7 @@ void nx_napi_enable(struct unm_adapter_s *adapter)
     for(ring = 0; ring < num_sds_rings; ring++) {
         napi_enable( &adapter->host_sds_rings[ring].napi);
+        adapter->host_sds_rings[ring].napi_enable = 1;
     }
 }
@@ -1012,7 +1013,10 @@ void nx_napi_disable(struct unm_adapter_s *adapter)
     int ring;
     for(ring = 0; ring < num_sds_rings; ring++) {
-        napi_disable(&adapter->host_sds_rings[ring].napi);
+        if(adapter->host_sds_rings[ring].napi_enable == 1) {
+            napi_disable(&adapter->host_sds_rings[ring].napi);
+            adapter->host_sds_rings[ring].napi_enable = 0;
+        }
     }
 }
 #elif defined(UNM_NIC_NAPI)
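The new `napi_enable` flag makes enable and disable symmetric: `napi_disable()` is only called on rings the driver itself enabled, since in the kernel disabling a ring that was never enabled can block indefinitely. A toy Python model of the guarded pairing (class and attribute names are illustrative):

```python
class SdsRing:
    """Toy stand-in for one status ring."""
    def __init__(self):
        self.enabled_in_kernel = False   # models the kernel-side NAPI state
        self.napi_enable = 0             # the bookkeeping flag the patch adds

    def kernel_napi_enable(self):
        self.enabled_in_kernel = True

    def kernel_napi_disable(self):
        # The real napi_disable() can block forever on a never-enabled ring.
        if not self.enabled_in_kernel:
            raise RuntimeError("napi_disable on a ring that was never enabled")
        self.enabled_in_kernel = False

def nx_napi_enable(rings):
    for r in rings:
        r.kernel_napi_enable()
        r.napi_enable = 1                # remember this ring really was enabled

def nx_napi_disable(rings):
    for r in rings:
        if r.napi_enable == 1:           # only disable rings we enabled ourselves
            r.kernel_napi_disable()
            r.napi_enable = 0

rings = [SdsRing(), SdsRing()]
nx_napi_disable(rings)                   # safe: nothing was enabled yet
nx_napi_enable(rings)
nx_napi_disable(rings)
```

Without the flag, the first `nx_napi_disable(rings)` call above would hit the never-enabled case.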
@@ -3687,6 +3691,10 @@ int nx_nic_multictx_get_filter_count(struct net_device *netdev, int queue_type)
     U32 max;
     U32 rcode;
+    if(adapter->is_up == FW_DEAD) {
+        return -1;
+    }
     nx_dev = adapter->nx_dev;
     pci_func = adapter->nx_dev->pci_func;
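This hunk is the first of many in this file that add the same guard: every multictx entry point now bails out before touching hardware contexts once firmware is marked dead. A minimal Python sketch of the pattern (the `FW_DEAD` value and the dict-shaped adapter are illustrative, not the driver's):

```python
FW_DEAD = 0xDEAD   # illustrative sentinel; the driver's real value may differ

def get_filter_count(adapter):
    """Bail out early when firmware is dead, as the patched
    nx_nic_multictx_* entry points now do, instead of dereferencing
    rx contexts that no longer exist after a failed firmware reset."""
    if adapter["is_up"] == FW_DEAD:
        return -1                 # caller sees an error; no hardware access
    return adapter["num_filters"]

print(get_filter_count({"is_up": FW_DEAD, "num_filters": 8}))  # -> -1
```

The same three-line check recurs below in the ctx-count, ctx-stats, alloc/free-ctx, and rx-rule paths.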
@@ -3713,6 +3721,7 @@ int nx_nic_multictx_get_filter_count(struct net_device *netdev, int queue_type)
 struct napi_struct * nx_nic_multictx_get_napi(struct net_device *netdev , int queue_id)
 {
     struct unm_adapter_s *adapter = netdev_priv(netdev);
+
     if(queue_id < 0 || queue_id > adapter->num_rx_queues)
         return NULL;
     return &(adapter->host_sds_rings[queue_id].napi);
@@ -3721,6 +3730,7 @@ struct napi_struct * nx_nic_multictx_get_napi(struct net_device *netdev , int qu
 int nx_nic_multictx_get_ctx_count(struct net_device *netdev, int queue_type)
 {
     U32 max = 0;
+
 #if 0
     nx_host_nic_t* nx_dev;
     U32 pci_func ;
@@ -3748,6 +3758,11 @@ int nx_nic_multictx_get_ctx_count(struct net_device *netdev, int queue_type)
 #endif
 #ifdef ESX
     struct unm_adapter_s *adapter = netdev_priv(netdev);
+
+    if(adapter->is_up == FW_DEAD) {
+        return -1;
+    }
     max = adapter->num_rx_queues ;
 #endif
     return (max);
@@ -3757,6 +3772,7 @@ int nx_nic_multictx_get_queue_vector(struct net_device *netdev, int qid)
 {
 #ifdef ESX
     struct unm_adapter_s *adapter = netdev_priv(netdev);
+
     if(qid < 0 || qid > adapter->num_rx_queues)
         return -1;
     return adapter->msix_entries[qid].vector ;
@@ -3775,6 +3791,9 @@ int nx_nic_multictx_get_ctx_stats(struct net_device *netdev, int ctx_id,
     rds_host_ring_t *host_rds_ring = NULL;
     int ring;
 #ifdef ESX
+    if(adapter->is_up == FW_DEAD) {
+        return -1;
+    }
     if(ctx_id < 0 || ctx_id > adapter->num_rx_queues)
         return -1;
 #endif
@@ -4034,6 +4053,10 @@ int nx_nic_multictx_alloc_rx_ctx(struct net_device *netdev)
     int err = 0;
     nx_host_rx_ctx_t *nxhal_host_rx_ctx = NULL;
     struct unm_adapter_s *adapter = netdev_priv(netdev);
+
+    if(adapter->is_up == FW_DEAD) {
+        return -1;
+    }
     ctx_id = nx_nic_create_rx_ctx(netdev);
     if( ctx_id >= 0 ) {
         nxhal_host_rx_ctx = adapter->nx_dev->rx_ctxs[ctx_id];
@@ -4137,6 +4160,9 @@ int nx_nic_multictx_free_rx_ctx(struct net_device *netdev, int ctx_id)
     nx_host_rx_ctx_t *nxhal_host_rx_ctx = NULL;
     struct unm_adapter_s *adapter = netdev_priv(netdev);
+    if(adapter->is_up == FW_DEAD) {
+        return -1;
+    }
     if (ctx_id > adapter->nx_dev->alloc_rx_ctxs) {
         nx_nic_print4(adapter, "%s: Invalid context id\n",
                       __FUNCTION__);
@@ -4187,6 +4213,9 @@ int nx_nic_multictx_set_rx_rule(struct net_device *netdev, int ctx_id, char* mac
     nx_rx_rule_t *rx_rule = NULL;
     struct unm_adapter_s *adapter = netdev_priv(netdev);
+    if(adapter->is_up == FW_DEAD) {
+        return -1;
+    }
     if (adapter->fw_v34) {
         nx_nic_print3(adapter, "%s: does not support in FW V3.4\n", __FUNCTION__);
         return -1;
@@ -4247,6 +4276,9 @@ int nx_nic_multictx_remove_rx_rule(struct net_device *netdev, int ctx_id, int ru
     struct unm_adapter_s *adapter = netdev_priv(netdev);
     char mac_addr[6];
+    if(adapter->is_up == FW_DEAD) {
+        return -1;
+    }
     if (adapter->fw_v34) {
         nx_nic_print3(adapter, "%s: does not support in FW V3.4\n", __FUNCTION__);
         return -1;
@@ -6106,6 +6138,7 @@ int netxen_nic_attach_all_ports(struct unm_adapter_s *adapter)
 {
     struct pci_dev *dev;
     int prev_lro_state;
+    int rv = 0;
 #ifdef ESX
     int bus_id = adapter->pdev->bus->number;
     list_for_each_entry(dev, &pci_devices, global_list) {
@@ -6129,13 +6162,17 @@ int netxen_nic_attach_all_ports(struct unm_adapter_s *adapter)
             }
             curr_adapter->lro.enabled = prev_lro_state;
             if(curr_adapter->state == PORT_UP) {
-                unm_nic_attach(curr_adapter);
+                int err = unm_nic_attach(curr_adapter);
+                if (err != 0) {
+                    nx_nic_print3(curr_adapter, "Failed to attach device\n");
+                    rv = err;
+                }
             }
             nx_reset_netq_rx_queues(curr_netdev);
             netif_device_attach(curr_netdev);
         }
     }
-    return 0;
+    return rv;
 }
 int check_fw_reset_failure(struct unm_adapter_s *adapter)
 {
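Before this change the loop discarded `unm_nic_attach()`'s return value and always reported success; now it records the last failure and still visits every port, so the firmware-reset path below can tell that reattachment went wrong. The control flow, sketched in Python (function and port names are illustrative):

```python
def attach_all_ports(ports, attach):
    """Attach every port; return 0 on success or the last attach error.
    Mirrors the patched netxen_nic_attach_all_ports: a failure is
    recorded and reported, but the remaining ports are still attached."""
    rv = 0
    for port in ports:
        err = attach(port)
        if err != 0:
            print(f"Failed to attach {port}")
            rv = err               # remember the failure, keep going
    return rv

results = {"vmnic0": 0, "vmnic1": -5, "vmnic2": 0}
print(attach_all_ports(results, results.get))   # -> -5
```

Note the trade-off kept from the C code: if several ports fail, only the last error code survives.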
@@ -6265,11 +6302,24 @@ int netxen_nic_detach_all_ports(struct unm_adapter_s *adapter, int fw_health)
 }
+static void nx_nic_halt_firmware(struct unm_adapter_s *adapter)
+{
+    NXWR32(adapter, UNM_CRB_PEG_NET_0 + 0x3c, 1);
+    NXWR32(adapter, UNM_CRB_PEG_NET_1 + 0x3c, 1);
+    NXWR32(adapter, UNM_CRB_PEG_NET_2 + 0x3c, 1);
+    NXWR32(adapter, UNM_CRB_PEG_NET_3 + 0x3c, 1);
+    NXWR32(adapter, UNM_CRB_PEG_NET_4 + 0x3c, 1);
+    nx_nic_print2(adapter, "Firmwre is halted.\n");
+}
 static void unm_watchdog_task_fw_reset(TASK_PARAM adapid)
 {
     u32 old_alive_counter, rv, fw_reset;
     u32 failure_type, return_address;
+    int err = 0;
 #if defined(__VMKLNX__) || LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,20)
     struct unm_adapter_s *adapter = container_of(adapid,
                                                  struct unm_adapter_s,
@@ -6316,6 +6366,7 @@ static void unm_watchdog_task_fw_reset(TASK_PARAM adapid)
     if(check_fw_reset_failure(adapter)) {
         nx_nic_print3(adapter,"\nFW reset failed.");
         nx_nic_print3(adapter,"\nPlease unload and load the driver.\n");
+        nx_nic_halt_firmware(adapter);
         module_put(THIS_MODULE);
         goto out2;
     }
@@ -6326,8 +6377,17 @@ static void unm_watchdog_task_fw_reset(TASK_PARAM adapid)
     NXWR32(adapter,UNM_FW_RESET, 0);
     api_unlock(adapter);
-    if(!rv)
-        netxen_nic_attach_all_ports(adapter);
+    if(!rv) {
+        err = netxen_nic_attach_all_ports(adapter);
+        if(err) {
+            nx_nic_print3(adapter,"FW reset failed.\n");
+            nx_nic_print3(adapter,"Please unload and load the driver.\n");
+            netxen_nic_detach_all_ports(adapter, 1);
+            nx_nic_halt_firmware(adapter);
+            module_put(THIS_MODULE);
+            goto out2;
+        }
+    }
     netxen_nic_reset_tx_timeout(adapter);
     module_put(THIS_MODULE);
     goto out;
@@ -6363,6 +6423,7 @@ static void unm_watchdog_task_fw_reset(TASK_PARAM adapid)
     if(check_fw_reset_failure(adapter)) {
         nx_nic_print3(adapter,"\nFW reset failed.");
         nx_nic_print3(adapter,"\nPlease unload and load the driver.\n");
+        nx_nic_halt_firmware(adapter);
         module_put(THIS_MODULE);
         goto out2;
     }
@@ -6372,6 +6433,7 @@ static void unm_watchdog_task_fw_reset(TASK_PARAM adapid)
     netxen_nic_detach_all_ports(adapter, 1);
     nx_nic_print3(adapter,"\nFatal Error.");
     nx_nic_print3(adapter,"\nPlease unload and load the driver.\n");
+    nx_nic_halt_firmware(adapter);
     module_put(THIS_MODULE);
     goto out2;
 }
@@ -6383,8 +6445,17 @@ static void unm_watchdog_task_fw_reset(TASK_PARAM adapid)
     NXWR32(adapter,UNM_FW_RESET, 0);
     api_unlock(adapter);
-    if(!rv)
-        netxen_nic_attach_all_ports(adapter);
+    if(!rv) {
+        err = netxen_nic_attach_all_ports(adapter);
+        if(err) {
+            nx_nic_print3(adapter,"FW reset failed.\n");
+            nx_nic_print3(adapter,"Please unload and load the driver.\n");
+            netxen_nic_detach_all_ports(adapter, 1);
+            nx_nic_halt_firmware(adapter);
+            module_put(THIS_MODULE);
+            goto out2;
+        }
+    }
     module_put(THIS_MODULE);
 }
 }
@@ -6440,9 +6511,14 @@ static void unm_tx_timeout_task(TASK_PARAM adapid)
     unsigned long flags;
     int fw_reset_count;
+    if(adapter->is_up == FW_DEAD) {
+        return;
+    }
     spin_lock_irqsave(&adapter->lock, flags);
     adapter->tx_timeout_count ++;
-    if(adapter->tx_timeout_count > UNM_FW_RESET_THRESHOLD) {
+    if((adapter->tx_timeout_count > UNM_FW_RESET_THRESHOLD) ||
+       (adapter->attach_flag == 0)) {
         api_lock(adapter);
         fw_reset_count = NXRD32(adapter,UNM_FW_RESET);
         if(fw_reset_count != 1) {
@@ -6454,9 +6530,15 @@ static void unm_tx_timeout_task(TASK_PARAM adapid)
     }
     read_lock(&adapter->adapter_lock);
-    unm_nic_disable_int(adapter, &adapter->nx_dev->rx_ctxs[0]->sds_rings[0]);
+    if(adapter->nx_dev->rx_ctxs[0] != NULL) {
+        unm_nic_disable_int(adapter, &adapter->nx_dev->rx_ctxs[0]->sds_rings[0]);
+    }
     adapter->netdev->trans_start = jiffies;
-    unm_nic_enable_int(adapter, &adapter->nx_dev->rx_ctxs[0]->sds_rings[0]);
+    if(adapter->nx_dev->rx_ctxs[0] != NULL) {
+        unm_nic_enable_int(adapter, &adapter->nx_dev->rx_ctxs[0]->sds_rings[0]);
+    }
     read_unlock(&adapter->adapter_lock);
     netif_wake_queue(adapter->netdev);


@@ -1,5 +1,5 @@
 /*
- * Portions Copyright 2008-2010 VMware, Inc.
+ * Portions Copyright 2008-2011 VMware, Inc.
  */
 /*
  * Adaptec AAC series RAID controller driver
@@ -621,6 +621,9 @@ static int aac_eh_abort(struct scsi_cmnd* cmd)
 #endif /* defined(__VMKLNX__) */
             }
         }
+        /* now call done */
+        cmd->result = DID_ABORT << 16;
+        cmd->scsi_done(cmd);
         break;
     case TEST_UNIT_READY:
         /* Mark associated FIB to not complete, eh handler does this */
@@ -645,6 +648,9 @@ static int aac_eh_abort(struct scsi_cmnd* cmd)
             }
         }
 #if defined(__VMKLNX__)
+        /* now call done */
+        cmd->result = DID_ABORT << 16;
+        cmd->scsi_done(cmd);
         break;
     default:
     {


@@ -129,6 +129,7 @@ static int fcoe_fip_recv(struct sk_buff *, struct net_device *,
                          struct packet_type *, struct net_device *);
 #endif /* !defined(__VMKLNX__) */
+static inline int fcoe_start_io(struct sk_buff *);
 static void fcoe_fip_send(struct fcoe_ctlr *, struct sk_buff *);
 static void fcoe_update_src_mac(struct fc_lport *, u8 *);
 static u8 *fcoe_get_src_mac(struct fc_lport *);
@@ -620,7 +621,11 @@ static int fcoe_fip_recv(struct sk_buff *skb, struct net_device *netdev,
 static void fcoe_fip_send(struct fcoe_ctlr *fip, struct sk_buff *skb)
 {
     skb->dev = fcoe_from_ctlr(fip)->netdev;
-    dev_queue_xmit(skb);
+    if (fcoe_start_io(skb)) {
+        struct fc_lport *lport = fip->lp;
+        fcoe_check_wait_queue(lport, skb);
+    }
 }
 /**
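The FIP transmit path now goes through `fcoe_start_io()` and, when the immediate send fails, parks the frame on the lport's wait queue for a later flush rather than losing it in a failed `dev_queue_xmit()`. The transmit-or-queue pattern, modeled in Python (the frame strings and callable are illustrative stand-ins):

```python
from collections import deque

def send_with_backpressure(frame, start_io, wait_queue):
    """Try to transmit a FIP frame immediately; on failure (non-zero
    return, as with fcoe_start_io) park it on the wait queue so a later
    flush can retry it -- the behavior the fcoe_fip_send change adopts."""
    if start_io(frame) != 0:
        wait_queue.append(frame)      # retried when the queue is flushed

queue = deque()
send_with_backpressure("FIP keep-alive", lambda f: 1, queue)  # device busy
send_with_backpressure("FLOGI", lambda f: 0, queue)           # sent directly
print(list(queue))  # -> ['FIP keep-alive']
```

The point of the change is loss avoidance under backpressure: control frames such as keep-alives survive a transient transmit failure.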


@@ -1416,10 +1416,14 @@ static void fcoe_ctlr_timeout(unsigned long arg)
  */
 static inline int fcoe_ctlr_l2_link_ok(struct fc_lport *lport)
 {
-    struct net_device *netdev = lport->tt.get_cna_netdev(lport);
-    VMK_ASSERT(netdev);
-    if ((netdev->flags & IFF_UP) && netif_carrier_ok(netdev)) {
+    struct net_device *netdev = NULL;
+    if (lport->tt.get_cna_netdev) {
+        netdev = lport->tt.get_cna_netdev(lport);
+        VMK_ASSERT(netdev);
+    }
+    if (netdev && (netdev->flags & IFF_UP) && netif_carrier_ok(netdev)) {
         return TRUE;
     }
@@ -1456,7 +1460,11 @@ static void fcoe_ctlr_timer_work(struct work_struct *work)
          * For ESX, we drive VLAN discovery directly as we have
          * no userland agent for this purpose.
          */
-        fcoe_ctlr_vlan_request(fip);
+        if (fip->vlan_id == FCOE_FIP_NO_VLAN_DISCOVERY) {
+            fcoe_ctlr_solicit(fip, NULL);
+        } else {
+            fcoe_ctlr_vlan_request(fip);
+        }
     } else if (fcoe_ctlr_l2_link_ok(fip->lp) && (0 == fip->vlan_id)) {
         /* If L2 link is up, keep retrying VLAN discovery */
         LIBFCOE_FIP_DBG(fip, "host%u: FIP VLAN ID unavail. "
@@ -1657,6 +1665,8 @@ static int fcoe_ctlr_vlan_request(struct fcoe_ctlr *fip)
     struct net_device *netdev;
+    VMK_ASSERT(fip->vlan_id != FCOE_FIP_NO_VLAN_DISCOVERY);
     skb = dev_alloc_skb(sizeof(*vlan_req));
     if (!skb) {
         LIBFCOE_FIP_DBG(fip, "Cannot allocate skb\n");
@@ -1691,6 +1701,7 @@ static int fcoe_ctlr_vlan_request(struct fcoe_ctlr *fip)
     if (vmklnx_cna_set_vlan_tag(netdev, 0) == -1) {
         LIBFCOE_FIP_DBG(fip, "%s: vmklnx_cna_set_vlan_tag() failed\n",
                         __FUNCTION__);
+        kfree_skb(skb);
         return -1;
     }

File diff suppressed because it is too large


@ -1,18 +1,31 @@
/* /*
* Portions Copyright 2008 VMware, Inc. * Linux MegaRAID driver for SAS based RAID controllers
*/
/*
* *
* Linux MegaRAID driver for SAS based RAID controllers * Copyright (c) 2009-2011 LSI Corporation.
* Portions Copyright 2008 VMware, Inc.
* *
* Copyright (c) 2003-2005 LSI Logic Corporation. * This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
* *
* This program is free software; you can redistribute it and/or * This program is distributed in the hope that it will be useful,
* modify it under the terms of the GNU General Public License * but WITHOUT ANY WARRANTY; without even the implied warranty of
* as published by the Free Software Foundation; either version * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* 2 of the License, or (at your option) any later version. * GNU General Public License for more details.
* *
* FILE : megaraid_sas.h * You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
* FILE: megaraid_sas.h
*
* Authors: LSI Corporation
*
* Send feedback to: <megaraidlinux@lsi.com>
*
* Mail to: LSI Corporation, 1621 Barber Lane, Milpitas, CA 95035
* ATTN: Linuxraid
*/ */
#ifndef LSI_MEGARAID_SAS_H #ifndef LSI_MEGARAID_SAS_H
@@ -21,22 +34,24 @@
 /*
  * MegaRAID SAS Driver meta data
  */
-#define MEGASAS_VERSION			"00.00.04.32.1vmw"
-#define MEGASAS_RELDATE			"July 16, 2010"
-#define MEGASAS_EXT_VERSION		"Fri. July 16 16:13:02 EST 2010"
+#define MEGASAS_VERSION			"00.00.05.34-1vmw"
+#define MEGASAS_RELDATE			"May 02, 2011"
+#define MEGASAS_EXT_VERSION		"Mon. May 2 17:00:00 PDT 2011"
 
 /*
  * Device IDs
  */
-#define	PCI_DEVICE_ID_LSI_SAS1078R	0x0060  // Dell PERC6i
+#define	PCI_DEVICE_ID_LSI_SAS1078R	0x0060
 #define	PCI_DEVICE_ID_LSI_SAS1078DE	0x007C
 #define	PCI_DEVICE_ID_LSI_VERDE_ZCR	0x0413
 #define	PCI_DEVICE_ID_LSI_SAS1078GEN2	0x0078
 #define	PCI_DEVICE_ID_LSI_SAS0079GEN2	0x0079
 #define	PCI_DEVICE_ID_LSI_SAS0073SKINNY	0x0073
 #define	PCI_DEVICE_ID_LSI_SAS0071SKINNY	0x0071
+#define	PCI_DEVICE_ID_LSI_FUSION	0x005b
 
 /*
  * =====================================
@ -65,7 +80,8 @@
#define MFI_STATE_READY 0xB0000000 #define MFI_STATE_READY 0xB0000000
#define MFI_STATE_OPERATIONAL 0xC0000000 #define MFI_STATE_OPERATIONAL 0xC0000000
#define MFI_STATE_FAULT 0xF0000000 #define MFI_STATE_FAULT 0xF0000000
#define MFI_RESET_REQUIRED 0x00000001 #define MFI_RESET_REQUIRED 0x00000001
#define MFI_RESET_ADAPTER 0x00000002
#define MEGAMFI_FRAME_SIZE 64 #define MEGAMFI_FRAME_SIZE 64
@ -131,6 +147,7 @@
#define MFI_CMD_STP 0x08 #define MFI_CMD_STP 0x08
#define MR_DCMD_CTRL_GET_INFO 0x01010000 #define MR_DCMD_CTRL_GET_INFO 0x01010000
#define MR_DCMD_LD_GET_LIST 0x03010000
#define MR_DCMD_CTRL_CACHE_FLUSH 0x01101000 #define MR_DCMD_CTRL_CACHE_FLUSH 0x01101000
#define MR_FLUSH_CTRL_CACHE 0x01 #define MR_FLUSH_CTRL_CACHE 0x01
@@ -150,6 +167,8 @@
 #define MR_DCMD_CLUSTER_RESET_LD	0x08010200
 #define MR_DCMD_PD_LIST_QUERY		0x02010100
 
+#define MR_DCMD_CTRL_IO_METRICS_GET	0x01170200	// get IO metrics
+
 #define MR_EVT_CFG_CLEARED		0x0004
 #define MR_EVT_LD_STATE_CHANGE		0x0051
@@ -160,6 +179,8 @@
 #define MR_EVT_FOREIGN_CFG_IMPORTED	0x00db
 #define MR_EVT_LD_OFFLINE		0x00fc
 #define MR_EVT_CTRL_HOST_BUS_SCAN_REQUESTED	0x0152
+#define MR_EVT_CTRL_PERF_COLLECTION	0x017e
+
 #define MAX_LOGICAL_DRIVES		64
@@ -321,6 +342,7 @@ enum MR_PD_QUERY_TYPE {
 #define MR_EVT_FOREIGN_CFG_IMPORTED	0x00db
 #define MR_EVT_LD_OFFLINE		0x00fc
 #define MR_EVT_CTRL_HOST_BUS_SCAN_REQUESTED	0x0152
+#define MAX_LOGICAL_DRIVES		64
 
 enum MR_PD_STATE {
 	MR_PD_STATE_UNCONFIGURED_GOOD	= 0x00,
@@ -377,6 +399,33 @@ struct megasas_pd_list {
 	u8 driveState;
 } __attribute__ ((packed));
 
+/*
+ * defines the logical drive reference structure
+ */
+typedef union _MR_LD_REF {	// LD reference structure
+	struct {
+		u8	targetId;	// LD target id (0 to MAX_TARGET_ID)
+		u8	reserved;	// reserved to make in line with MR_PD_REF
+		u16	seqNum;		// Sequence Number
+	};
+	u32	ref;		// shorthand reference to full 32-bits
+} MR_LD_REF;			// 4 bytes
+
+/*
+ * defines the logical drive list structure
+ */
+struct MR_LD_LIST {
+	u32	ldCount;	// number of LDs
+	u32	reserved;	// pad to 8-byte boundary
+	struct {
+		MR_LD_REF ref;	// LD reference
+		u8	state;		// current LD state (MR_LD_STATE)
+		u8	reserved[3];	// pad to 8-byte boundary
+		u64	size;		// LD size
+	} ldList[MAX_LOGICAL_DRIVES];
+} __attribute__ ((packed));
+
 //
 /*
  * SAS controller properties
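For readers unfamiliar with the new MR_DCMD_LD_GET_LIST plumbing, here is a minimal host-side sketch of how a reply shaped like MR_LD_LIST can be walked to mark reported target ids present. The typedefs are trimmed stand-ins (fixed-width stdint types instead of kernel u8/u32), `fill_ld_ids` and `MAX_LD_IDS` are illustrative names, not the driver's code:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAX_LOGICAL_DRIVES 64
#define MAX_LD_IDS (2 * MAX_LOGICAL_DRIVES)   /* stand-in for MEGASAS_MAX_LD_IDS */

/* Trimmed-down copies of the structures in the hunk above. */
typedef union _MR_LD_REF {
	struct {
		uint8_t  targetId;
		uint8_t  reserved;
		uint16_t seqNum;
	};
	uint32_t ref;
} MR_LD_REF;

struct MR_LD_LIST {
	uint32_t ldCount;
	uint32_t reserved;
	struct {
		MR_LD_REF ref;
		uint8_t   state;
		uint8_t   reserved[3];
		uint64_t  size;
	} ldList[MAX_LOGICAL_DRIVES];
};

/* Mark each reported target id present, the way a driver would fill a
 * per-target presence table from the firmware reply. Rejects a reply
 * whose ldCount exceeds the array it describes. */
static int fill_ld_ids(const struct MR_LD_LIST *list, uint8_t *ld_ids)
{
	uint32_t i;

	if (list->ldCount > MAX_LOGICAL_DRIVES)
		return -1;                      /* malformed reply */
	for (i = 0; i < list->ldCount; i++)
		ld_ids[list->ldList[i].ref.targetId] = 1;
	return (int)list->ldCount;
}
```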
@@ -424,7 +473,7 @@ struct megasas_ctrl_prop {
 		u32	prCorrectUnconfiguredAreas : 1;
 		u32	useFdeOnly : 1;
 		u32	disableNCQ : 1;
 		u32	SSDSMARTerEnabled : 1;
 		u32	SSDPatrolReadEnabled : 1;
 		u32	enableSpinDownUnconfigured : 1;
 		u32	autoEnhancedImport : 1;
@@ -705,7 +754,9 @@ struct megasas_ctrl_info {
 #define MEGASAS_DEFAULT_CMD_PER_LUN	128
 #define MEGASAS_MAX_PD			(2 * \
 					MEGASAS_MAX_DEV_PER_CHANNEL)
+#define MEGASAS_MAX_LD_IDS		(MEGASAS_MAX_LD_CHANNELS * \
+					MEGASAS_MAX_DEV_PER_CHANNEL)
+#define MEGASAS_MAX_NAME		32
 
 #define MEGASAS_MAX_SECTORS_IEEE	(2*128)
 #define MEGASAS_DBG_LVL			1
 #define MEGASAS_FW_BUSY			1
@@ -763,8 +814,6 @@ struct megasas_ctrl_info {
 #define MFI_GEN2_ENABLE_INTERRUPT_MASK	0x00000001
 #define MFI_REPLY_SKINNY_MESSAGE_INTERRUPT	0x40000000
 #define MFI_SKINNY_ENABLE_INTERRUPT_MASK	(0x00000001)
-
-
 #define MFI_1068_PCSR_OFFSET		0x84
 #define MFI_1068_FW_HANDSHAKE_OFFSET	0x64
 #define MFI_1068_FW_READY		0xDDDD0000
@@ -777,11 +826,10 @@ struct megasas_ctrl_info {
  */
 
 struct megasas_register_set {
-	u32	reserved_0;			/*0000h*/
-	u32	fusion_seq_offset;		/*0008h*/
-	u32	fusion_host_diag;		/*0004h*/
+	u32	doorbell;			/*0000h*/
+	u32	fusion_seq_offset;		/*0004h*/
+	u32	fusion_host_diag;		/*0008h*/
 	u32	reserved_01;			/*000Ch*/
 
 	u32	inbound_msg_0;			/*0010h*/
 	u32	inbound_msg_1;			/*0014h*/
@@ -801,15 +849,18 @@ struct megasas_register_set {
 	u32	inbound_queue_port;		/*0040h*/
 	u32	outbound_queue_port;		/*0044h*/
 
-	u32	reserved_2[22];			/*0048h*/
+	u32	reserved_2[9];			/*0048h*/
+	u32	reply_post_host_index;		/*006Ch*/
+	u32	reserved_2_2[12];		/*0070h*/
 
 	u32	outbound_doorbell_clear;	/*00A0h*/
 	u32	reserved_3[3];			/*00A4h*/
 
 	u32	outbound_scratch_pad ;		/*00B0h*/
+	u32	outbound_scratch_pad_2;		/*00B4h*/
 
-	u32	reserved_4[3];			/*00B4h*/
+	u32	reserved_4[2];			/*00B8h*/
 
 	u32	inbound_low_queue_port ;	/*00C0h*/
@@ -901,9 +952,9 @@ struct megasas_init_frame {
 	u32 queue_info_new_phys_addr_hi;	/*1Ch */
 	u32 queue_info_old_phys_addr_lo;	/*20h */
 	u32 queue_info_old_phys_addr_hi;	/*24h */
 
-	u32 reserved_4[6];			/*28h */
+	u32 driver_ver_lo;			/*28h */
+	u32 driver_ver_hi;			/*2Ch */
+	u32 reserved_4[4];			/*30h */
 } __attribute__ ((packed));
 
 struct megasas_init_queue_info {
@@ -1263,12 +1314,120 @@ struct megasas_evt_detail {
 } __attribute__ ((packed));
 
+#define MAX_PERF_COLLECTION_VD	MAX_LOGICAL_DRIVES
+#define BLOCKTOMB_BITSHIFT	11
+
+/*
+ * defines the logical drive performance metrics structure
+ * These metrics are valid for the current collection period
+ */
+typedef struct _MR_IO_METRICS_SIZE {
+	u32	lessThan512B;		// Number of IOs: size <= 512B
+	u32	between512B_4K;		// Number of IOs: 512B < size <=4K
+	u32	between4K_16K;		// Number of IOs: 4K < size <=16K
+	u32	between16K_64K;		// Number of IOs: 16K < size <=64K
+	u32	between64K_256K;	// Number of IOs: 64K < size <=256K
+	u32	moreThan256K;		// Number of IOs: 256K < size
+} __attribute__ ((packed)) MR_IO_METRICS_SIZE;
+
+/*
+ * define the structure to capture the randomness of the IOs
+ * each counter is for IOs, whose LBA is set distance apart from the previous IO.
+ */
+typedef struct _MR_IO_METRICS_RANDOMNESS {
+	u32	sequential;		// Number of IOs: sequential (inter-LBA distance is 0)
+	u32	lessThan64K;		// Number of IOs: within 64KB of previous IO
+	u32	between64K_512K;	// Number of IOs: 64K < LBA <=512K
+	u32	between512K_16M;	// Number of IOs: 512K < LBA <=16M
+	u32	between16M_256M;	// Number of IOs: 16M < LBA <=256M
+	u32	between256M_1G;		// Number of IOs: 256M < LBA <=1G
+	u32	moreThan1G;		// Number of IOs: 1G < LBA
+} __attribute__ ((packed)) MR_IO_METRICS_RANDOMNESS;
+
+/*
+ * define the structure for LD cache usage
+ */
+typedef struct _MR_IO_METRICS_LD_CACHE {
+	u8			targetId;
+	u8			reserved[7];	// For future use
+	MR_IO_METRICS_SIZE	readSizeCache;	// Reads to Primary Cache
+	MR_IO_METRICS_SIZE	writeSizeCache;	// Writes to Primary Cache
+	MR_IO_METRICS_SIZE	readSizeSSC;	// Reads to Secondary Cache
+	MR_IO_METRICS_SIZE	writeSizeSSC;	// Writes to Secondary Cache
+} __attribute__ ((packed)) MR_IO_METRICS_LD_CACHE;
+
+/*
+ * define the structure for controller cache usage
+ */
+typedef struct _MR_IO_METRICS_CACHE {
+	u32	size;			// size of this data structure (including size field)
+	u32	collectionPeriod;	// Time (sec), taken to collect this data
+	u32	avgDirtyCache;		// Running average of dirty cache (% of cache size)
+	u32	avgCacheUsed;		// Running average of total cache in use
+	u32	readAheadCache;		// Cache(MB) used for Read Ahead data
+	u32	readAheadSSC;		// Secondary Cache(MB) used for Read Ahead data
+	u32	unusedReadAheadCache;	// Cache(MB) for Read Ahead data, that wasn't accessed
+	u32	unusedReadAheadSSC;	// Secondary Cache(MB) for Read Ahead data, that wasn't accessed
+	u32	flushBlockTime;		// Time(ms) IOs were blocked while cache is flushed etc.
+	u8	reserved[2];		// For future use
+	u16	count;			// count of number of targetId entries in this list
+	MR_IO_METRICS_LD_CACHE	ldIoCache[1];	// Variable list of LD IO metrics
+} __attribute__ ((packed)) MR_IO_METRICS_CACHE;
+
+/*
+ * define the structure for overall LD IO metrics (from host perspective)
+ */
+typedef struct _MR_IO_METRICS_LD_OVERALL {
+	u8	targetId;
+	u8	pad;
+	u16	idleTime;	// Total seconds, LD has been idle
+	u32	reserved;
+	u32	readMB;		// Total read data transferred in MB
+	u32	writeMB;	// Total write data transferred in MB
+	MR_IO_METRICS_SIZE	readSize;	// Aggregate the number of read IOs for total IO count
+	MR_IO_METRICS_SIZE	writeSize;	// Aggregate the number of write IOs for write total IO count
+	MR_IO_METRICS_RANDOMNESS	readRandomness;
+	MR_IO_METRICS_RANDOMNESS	writeRandomness;
+} __attribute__ ((packed)) MR_IO_METRICS_LD_OVERALL;
+
+typedef struct _MR_IO_METRICS_LD_OVERALL_LIST {
+	u32	size;			// size of this data structure (including size field)
+	u32	collectionPeriod;	// Time (sec), taken to collect this data
+	MR_IO_METRICS_LD_OVERALL	ldIOOverall[1];	// Variable list of overall LD IO metrics
+} __attribute__ ((packed)) MR_IO_METRICS_LD_OVERALL_LIST;
+
+/*
+ * define the structure for controller's IO metrics
+ */
+typedef struct _MR_IO_METRICS {
+	MR_IO_METRICS_CACHE		ctrlIoCache;	// controller cache usage
+	MR_IO_METRICS_LD_OVERALL_LIST	ldIoMetrics;	// overall host IO metrics
+} __attribute__ ((packed)) MR_IO_METRICS;
+
+typedef struct _PERFORMANCEMETRIC
+{
+	u8	LogOn;
+	MR_IO_METRICS_LD_OVERALL	IoMetricsLD[MAX_PERF_COLLECTION_VD];
+	MR_IO_METRICS_LD_OVERALL	SavedIoMetricsLD[MAX_PERF_COLLECTION_VD];
+	u64	LastBlock[MAX_LOGICAL_DRIVES];
+	u64	LastIOTime[MAX_PERF_COLLECTION_VD];
+	u64	CollectEndTime;
+	u64	CollectStartTime;
+	u32	SavedCollectTimeSecs;
+} PERFORMANCEMETRIC;
+
 struct megasas_instance {
 
 	u32 *producer;
 	dma_addr_t producer_h;
 	u32 *consumer;
 	dma_addr_t consumer_h;
+	u32 *verbuf;
+	dma_addr_t verbuf_h;
 
 	u32 *reply_queue;
 	dma_addr_t reply_queue_h;
@@ -1277,11 +1436,13 @@ struct megasas_instance {
 	struct megasas_register_set __iomem *reg_set;
 
 	struct megasas_pd_list	pd_list[MEGASAS_MAX_PD];
+	u8	ld_ids[MEGASAS_MAX_LD_IDS];
 	s8 init_id;
 
 	u16 max_num_sge;
 	u16 max_fw_cmds;
+	// For Fusion its num IOCTL cmds, for others MFI based its max_fw_cmds
+	u16 max_mfi_cmds;
 	u32 max_sectors_per_req;
 
 	u32 cmd_per_lun;
@@ -1347,6 +1508,18 @@ struct megasas_instance {
 	struct timer_list io_completion_timer;
 	struct timer_list fw_live_poll_timer;
 	struct list_head internal_reset_pending_q;
+
+	/* Ptr to hba specific information */
+	void *ctrl_context;
+	u8 msi_flag;
+	struct msix_entry msixentry;
+	u64 map_id;
+	struct megasas_cmd *map_update_cmd;
+	unsigned long bar;
+	long reset_flags;
+	PERFORMANCEMETRIC PerformanceMetric;
+	u32 CurLdCount;
+	struct mutex reset_mutex;
 };
 
 enum {
 	MEGASAS_HBA_OPERATIONAL			= 0,
@@ -1363,11 +1536,18 @@ struct megasas_instance_template {
 	void (*enable_intr)(struct megasas_register_set __iomem *) ;
 	void (*disable_intr)(struct megasas_register_set __iomem *);
 
-	int (*clear_intr)(struct megasas_register_set __iomem *);
+	u32 (*clear_intr)(struct megasas_register_set __iomem *);
 
 	u32 (*read_fw_status_reg)(struct megasas_register_set __iomem *);
 	int (*adp_reset)(struct megasas_instance *, struct megasas_register_set __iomem *);
 	int (*check_reset)(struct megasas_instance *, struct megasas_register_set __iomem *);
+	irqreturn_t (*service_isr )(int irq, void *devp, struct pt_regs *regs);
+	void (*tasklet)(unsigned long);
+	u32 (*init_adapter)(struct megasas_instance *);
+	u32 (*build_and_issue_cmd) (struct megasas_instance *, struct scsi_cmnd *);
+	void (*issue_dcmd) (struct megasas_instance *instance,
+			    struct megasas_cmd *cmd);
 };
 
 #define MEGASAS_IS_LOGICAL(scp)	\
@@ -1393,7 +1573,13 @@ struct megasas_cmd {
 	struct list_head list;
 	struct scsi_cmnd *scmd;
 	struct megasas_instance *instance;
-	u32 frame_count;
+	union {
+		struct {
+			u16 smid;
+			u16 resvd;
+		} context;
+		u32 frame_count;
+	};
 };
 
 #define MAX_MGMT_ADAPTERS		1024


@@ -0,0 +1,460 @@
/*
* Linux MegaRAID driver for SAS based RAID controllers
*
* Copyright (c) 2009-2011 LSI Corporation.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
* FILE: megaraid_sas_fp.c
*
* Authors: LSI Corporation
* Sumant Patro
* Varad Talamacki
* Manoj Jose
*
* Send feedback to: <megaraidlinux@lsi.com>
*
* Mail to: LSI Corporation, 1621 Barber Lane, Milpitas, CA 95035
* ATTN: Linuxraid
*/
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/list.h>
#include <linux/moduleparam.h>
#include <linux/module.h>
#include <linux/spinlock.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
#include <linux/smp_lock.h>
#include <linux/uio.h>
#include <asm/uaccess.h>
#include <linux/fs.h>
#include <linux/compat.h>
#include <linux/blkdev.h>
#include <linux/poll.h>
#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>
#include "megaraid_sas_fusion.h"
#include <asm/div64.h>
#define ABS_DIFF(a,b) ( ((a) > (b)) ? ((a) - (b)) : ((b) - (a)) )
#define MR_LD_STATE_OPTIMAL 3
#define FALSE 0
#define TRUE 1
/* Prototypes */
void
mr_update_load_balance_params(MR_FW_RAID_MAP_ALL *map, PLD_LOAD_BALANCE_INFO lbInfo);
u32 mega_mod64(u64 dividend, u32 divisor)
{
u64 d;
u32 remainder;
if (!divisor)
printk(KERN_ERR "megasas : DIVISOR is zero, in div fn\n");
d = dividend;
remainder = do_div(d, divisor);
return remainder;
}
/**
* @param dividend : Dividend
* @param divisor : Divisor
*
* @return quotient
**/
u64 mega_div64_32(uint64_t dividend, uint32_t divisor)
{
u32 remainder;
u64 d;
if (!divisor)
printk(KERN_ERR "megasas : DIVISOR is zero in mod fn\n");
d = dividend;
remainder = do_div(d, divisor); /* Stores the quotient in d and returns the remainder */
return d;
}
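The kernel's do_div() stores the quotient in place and returns the remainder, which is why both helpers above copy the dividend into a scratch variable first. On a host with native 64-bit division the same contract can be sketched as (names `sketch_mod64`/`sketch_div64_32` are illustrative, not driver code):

```c
#include <assert.h>
#include <stdint.h>

/* Host-side sketch of mega_mod64: return only the remainder of a
 * 64-bit dividend by a 32-bit divisor. */
static uint32_t sketch_mod64(uint64_t dividend, uint32_t divisor)
{
	return (uint32_t)(dividend % divisor);   /* remainder only */
}

/* Host-side sketch of mega_div64_32: return only the quotient. */
static uint64_t sketch_div64_32(uint64_t dividend, uint32_t divisor)
{
	return dividend / divisor;               /* quotient only */
}
```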
MR_LD_RAID *MR_LdRaidGet(u32 ld, MR_FW_RAID_MAP_ALL *map)
{
return &map->raidMap.ldSpanMap[ld].ldRaid;
}
static MR_SPAN_BLOCK_INFO *MR_LdSpanInfoGet(u32 ld, MR_FW_RAID_MAP_ALL *map)
{
return &map->raidMap.ldSpanMap[ld].spanBlock[0];
}
static u8 MR_LdDataArmGet(u32 ld, u32 armIdx, MR_FW_RAID_MAP_ALL *map)
{
return map->raidMap.ldSpanMap[ld].dataArmMap[armIdx];
}
static u16 MR_ArPdGet(u32 ar, u32 arm, MR_FW_RAID_MAP_ALL *map)
{
return map->raidMap.arMapInfo[ar].pd[arm];
}
static u16 MR_LdSpanArrayGet(u32 ld, u32 span, MR_FW_RAID_MAP_ALL *map)
{
return map->raidMap.ldSpanMap[ld].spanBlock[span].span.arrayRef;
}
static u16 MR_PdDevHandleGet(u32 pd, MR_FW_RAID_MAP_ALL *map)
{
return map->raidMap.devHndlInfo[pd].curDevHdl;
}
u16 MR_GetLDTgtId(u32 ld, MR_FW_RAID_MAP_ALL *map)
{
return map->raidMap.ldSpanMap[ld].ldRaid.targetId;
}
u16 MR_TargetIdToLdGet(u32 ldTgtId, MR_FW_RAID_MAP_ALL *map)
{
return map->raidMap.ldTgtIdToLd[ldTgtId];
}
static MR_LD_SPAN *MR_LdSpanPtrGet(u32 ld, u32 span, MR_FW_RAID_MAP_ALL *map)
{
return &map->raidMap.ldSpanMap[ld].spanBlock[span].span;
}
/*
* This function will validate Map info data provided by FW
*/
u8 MR_ValidateMapInfo(MR_FW_RAID_MAP_ALL *map, PLD_LOAD_BALANCE_INFO lbInfo)
{
MR_FW_RAID_MAP *pFwRaidMap = &map->raidMap;
if (pFwRaidMap->totalSize !=
(sizeof (MR_FW_RAID_MAP) - sizeof(MR_LD_SPAN_MAP) +
(sizeof(MR_LD_SPAN_MAP) * pFwRaidMap->ldCount))) {
printk(KERN_ERR "megasas: map info structure size 0x%lx is not matching with ld count\n",
((sizeof (MR_FW_RAID_MAP) - sizeof(MR_LD_SPAN_MAP)) + (sizeof(MR_LD_SPAN_MAP) * pFwRaidMap->ldCount)));
printk(KERN_ERR "megasas: span map %lx, pFwRaidMap->totalSize : %x\n",sizeof(MR_LD_SPAN_MAP), pFwRaidMap->totalSize);
return 0;
}
mr_update_load_balance_params(map, lbInfo);
return 1;
}
u32 MR_GetSpanBlock(u32 ld, u64 row, u64 *span_blk, MR_FW_RAID_MAP_ALL *map, int *div_error)
{
MR_SPAN_BLOCK_INFO *pSpanBlock = MR_LdSpanInfoGet(ld, map);
MR_QUAD_ELEMENT *quad;
MR_LD_RAID *raid = MR_LdRaidGet(ld, map);
u32 span, j;
for (span=0; span<raid->spanDepth; span++, pSpanBlock++) {
for (j=0; j<pSpanBlock->block_span_info.noElements; j++) {
quad = &pSpanBlock->block_span_info.quad[j];
if (quad->diff == 0) {
*div_error = 1;
return span;
}
if (quad->logStart <= row && row <= quad->logEnd && ( mega_mod64(row-quad->logStart, quad->diff)) == 0) {
if (span_blk != NULL) {
u64 blk, debugBlk;
blk = mega_div64_32((row-quad->logStart), quad->diff);
debugBlk = blk;
blk = (blk + quad->offsetInSpan) << raid->stripeShift;
*span_blk = blk;
}
return span;
}
}
}
return span;
}
/*
******************************************************************************
*
* This routine calculates the arm, span and block for the specified stripe and
* reference in stripe.
*
* Inputs :
*
* ld - Logical drive number
* stripRow - Stripe number
* stripRef - Reference in stripe
*
* Outputs :
*
* span - Span number
* block - Absolute Block number in the physical disk
*/
u8 MR_GetPhyParams(u32 ld, u64 stripRow, u16 stripRef, u64 *pdBlock,
u16 *pDevHandle, RAID_CONTEXT *pRAID_Context,
MR_FW_RAID_MAP_ALL *map)
{
MR_LD_RAID *raid = MR_LdRaidGet(ld, map);
u32 pd, arRef;
u8 physArm, span;
u64 row;
u8 retval = TRUE;
int error_code = 0;
row = mega_div64_32(stripRow, raid->rowDataSize);
if (raid->level == 6) {
u32 logArm = mega_mod64(stripRow, raid->rowDataSize); // logical arm within row
u32 rowMod, armQ, arm;
if (raid->rowSize == 0)
return FALSE;
rowMod = mega_mod64(row, raid->rowSize); // get logical row mod
armQ = raid->rowSize-1-rowMod; // index of Q drive
arm = armQ+1+logArm; // data always logically follows Q
if (arm >= raid->rowSize) // handle wrap condition
arm -= raid->rowSize;
physArm = (u8)arm;
} else {
if (raid->modFactor == 0)
return FALSE;
physArm = MR_LdDataArmGet(ld, mega_mod64(stripRow, raid->modFactor), map);
}
if (raid->spanDepth == 1) {
span = 0;
*pdBlock = row << raid->stripeShift;
} else {
span = (u8)MR_GetSpanBlock(ld, row, pdBlock, map, &error_code);
if (error_code == 1)
return FALSE;
}
arRef = MR_LdSpanArrayGet(ld, span, map); // Get the array on which this span is present.
pd = MR_ArPdGet(arRef, physArm, map); // Get the Pd.
if (pd != MR_PD_INVALID)
*pDevHandle = MR_PdDevHandleGet(pd, map); // Get dev handle from Pd.
else {
*pDevHandle = MR_PD_INVALID; // set dev handle as invalid.
if (raid->level >= 5)
pRAID_Context->regLockFlags = REGION_TYPE_EXCLUSIVE;
else if (raid->level == 1) {
pd = MR_ArPdGet(arRef, physArm + 1, map); // Get Alternate Pd.
if (pd != MR_PD_INVALID)
*pDevHandle = MR_PdDevHandleGet(pd, map); // Get dev handle from Pd.
}
retval = FALSE;
}
*pdBlock += stripRef + MR_LdSpanPtrGet(ld, span, map)->startBlk;
pRAID_Context->spanArm = (span << RAID_CTX_SPANARM_SPAN_SHIFT) | physArm;
return retval;
}
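The RAID-6 branch of MR_GetPhyParams rotates data arms around the Q drive: armQ indexes the Q parity for the row, data arms logically follow it, and the index wraps at rowSize. A standalone sketch of just that rotation (mirroring the arithmetic above; `raid6_phys_arm` is an illustrative name, and the caller is assumed to pass log_arm < rowSize-2 as the driver does):

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the RAID-6 arm selection: for a given row, the Q drive
 * sits at rowSize-1-(row % rowSize), and data arms logically follow
 * it, wrapping once at rowSize. */
static uint8_t raid6_phys_arm(uint64_t row, uint32_t log_arm, uint32_t row_size)
{
	uint32_t row_mod = (uint32_t)(row % row_size); /* logical row mod */
	uint32_t arm_q   = row_size - 1 - row_mod;     /* index of Q drive */
	uint32_t arm     = arm_q + 1 + log_arm;        /* data follows Q */

	if (arm >= row_size)                           /* wrap condition */
		arm -= row_size;
	return (uint8_t)arm;
}
```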
typedef u64 REGION_KEY;
typedef u32 REGION_LEN;
/*
******************************************************************************
*
* MR_BuildRaidContext function
*
* This function will initiate command processing. The start/end row and strip
* information is calculated then the lock is acquired.
* This function will return 0 if region lock was acquired OR return num strips ???
*/
u8
MR_BuildRaidContext(struct IO_REQUEST_INFO *io_info,
RAID_CONTEXT *pRAID_Context, MR_FW_RAID_MAP_ALL *map)
{
MR_LD_RAID *raid;
u32 ld, stripSize, stripe_mask;
u64 endLba, endStrip, endRow, start_row, start_strip;
REGION_KEY regStart;
REGION_LEN regSize;
u8 num_strips, numRows;
u16 ref_in_start_stripe, ref_in_end_stripe;
u64 ldStartBlock;
u32 numBlocks, ldTgtId;
u8 isRead;
u8 retval = 0;
ldStartBlock = io_info->ldStartBlock;
numBlocks = io_info->numBlocks;
ldTgtId = io_info->ldTgtId;
isRead = io_info->isRead;
ld = MR_TargetIdToLdGet(ldTgtId, map);
raid = MR_LdRaidGet(ld, map);
stripSize = 1 << raid->stripeShift;
stripe_mask = stripSize-1;
/*
* calculate starting row and stripe, and number of strips and rows
*/
start_strip = ldStartBlock >> raid->stripeShift;
ref_in_start_stripe = (u16)(ldStartBlock & stripe_mask);
endLba = ldStartBlock + numBlocks - 1;
ref_in_end_stripe = (u16)(endLba & stripe_mask);
endStrip = endLba >> raid->stripeShift;
num_strips = (u8)(endStrip - start_strip + 1); // End strip
if (raid->rowDataSize == 0)
return FALSE;
start_row = mega_div64_32(start_strip, raid->rowDataSize); // Start Row
endRow = mega_div64_32(endStrip, raid->rowDataSize);
numRows = (u8)(endRow - start_row + 1); // get the row count
/*
* calculate region info.
*/
regStart = start_row << raid->stripeShift; // assume region is at the start of the first row
regSize = stripSize; // assume this IO needs the full row - we'll adjust if not true
if (num_strips > 1 || // If IO spans more than 1 strip, fp is not possible
(!isRead && raid->level != 0) || // FP is not possible for writes on non-0 raid levels.
!raid->capability.fpCapable) { // FP is not possible if LD is not capable.
io_info->fpOkForIo = FALSE;
} else {
io_info->fpOkForIo = TRUE;
}
if (numRows == 1) {
if (num_strips == 1) { // single-strip IOs can always lock only the data needed
regStart += ref_in_start_stripe;
regSize = numBlocks;
} // multi-strip IOs always need to full stripe locked
} else {
if (start_strip == (start_row + 1) * raid->rowDataSize - 1) { // if the start strip is the last in the start row
regStart += ref_in_start_stripe;
regSize = stripSize - ref_in_start_stripe; // initialize count to sectors from startRef to end of strip
}
if (numRows > 2)
regSize += (numRows-2) << raid->stripeShift; // add complete rows in the middle of the transfer
if (endStrip == endRow*raid->rowDataSize) // if IO ends within first strip of last row
regSize += ref_in_end_stripe+1;
else
regSize += stripSize;
}
pRAID_Context->timeoutValue = map->raidMap.fpPdIoTimeoutSec;
pRAID_Context->regLockFlags = (isRead)? REGION_TYPE_SHARED_READ : raid->regTypeReqOnWrite;
pRAID_Context->VirtualDiskTgtId = raid->targetId;
pRAID_Context->regLockRowLBA = regStart;
pRAID_Context->regLockLength = regSize;
pRAID_Context->configSeqNum = raid->seqNum;
/*Get Phy Params only if FP capable, or else leave it to MR firmware to do the calculation.*/
if (io_info->fpOkForIo) {
retval = MR_GetPhyParams(ld, start_strip, ref_in_start_stripe, &io_info->pdBlock, &io_info->devHandle, pRAID_Context, map);
if (io_info->devHandle == MR_PD_INVALID) // If IO on an invalid Pd, then FP is not possible.
io_info->fpOkForIo = FALSE;
return retval;
} else if (isRead) {
uint stripIdx;
for (stripIdx=0; stripIdx<num_strips; stripIdx++) {
if (!MR_GetPhyParams(ld, start_strip + stripIdx, ref_in_start_stripe, &io_info->pdBlock, &io_info->devHandle, pRAID_Context, map))
return TRUE;
}
}
return TRUE;
}
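The start/end arithmetic at the top of MR_BuildRaidContext is pure shift-and-divide: strips come from the LBA via stripeShift, rows from strips via rowDataSize. A standalone sketch of that geometry (struct and function names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the start/end math above for a hypothetical LD with
 * 2^stripe_shift blocks per strip and row_data_size data strips
 * per row. */
struct io_span {
	uint64_t start_strip, end_strip;
	uint64_t start_row, end_row;
	uint8_t  num_strips, num_rows;
};

static struct io_span compute_span(uint64_t ld_start_block, uint32_t num_blocks,
				   uint32_t stripe_shift, uint32_t row_data_size)
{
	struct io_span s;
	uint64_t end_lba = ld_start_block + num_blocks - 1;

	s.start_strip = ld_start_block >> stripe_shift;
	s.end_strip   = end_lba >> stripe_shift;
	s.num_strips  = (uint8_t)(s.end_strip - s.start_strip + 1);
	s.start_row   = s.start_strip / row_data_size;
	s.end_row     = s.end_strip / row_data_size;
	s.num_rows    = (uint8_t)(s.end_row - s.start_row + 1);
	return s;
}
```

With 128-block strips (stripe_shift 7) and three data strips per row, an IO of 300 blocks at LBA 100 touches strips 0..3 and therefore two rows, which is what disqualifies it from the fast path above.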
void
mr_update_load_balance_params(MR_FW_RAID_MAP_ALL *map, PLD_LOAD_BALANCE_INFO lbInfo)
{
int ldCount;
u16 ld;
MR_LD_RAID *raid;
for (ldCount = 0; ldCount < MAX_LOGICAL_DRIVES; ldCount++)
{
ld = MR_TargetIdToLdGet(ldCount, map);
if (ld >= MAX_LOGICAL_DRIVES) {
lbInfo[ldCount].loadBalanceFlag = 0;
continue;
}
raid = MR_LdRaidGet(ld, map);
/* Two drive Optimal RAID 1 */
if ((raid->level == 1) && (raid->rowSize == 2) && (raid->spanDepth == 1)
&& raid->ldState == MR_LD_STATE_OPTIMAL) {
u32 pd, arRef;
lbInfo[ldCount].loadBalanceFlag = 1;
arRef = MR_LdSpanArrayGet(ld, 0, map); // Get the array on which this span is present.
pd = MR_ArPdGet(arRef, 0, map); // Get the Pd.
lbInfo[ldCount].raid1DevHandle[0] = MR_PdDevHandleGet(pd, map); // Get dev handle from Pd.
pd = MR_ArPdGet(arRef, 1, map); // Get the Pd.
lbInfo[ldCount].raid1DevHandle[1] = MR_PdDevHandleGet(pd, map); // Get dev handle from Pd.
} else
lbInfo[ldCount].loadBalanceFlag = 0;
}
}
u8 megasas_get_best_arm(PLD_LOAD_BALANCE_INFO lbInfo, u8 arm, u64 block, u32 count)
{
u16 pend0, pend1;
u64 diff0, diff1;
u8 bestArm;
/* get the pending cmds for the data and mirror arms */
pend0 = atomic_read(&lbInfo->scsi_pending_cmds[0]);
pend1 = atomic_read(&lbInfo->scsi_pending_cmds[1]);
/* Determine the disk whose head is nearer to the req. block */
diff0 = ABS_DIFF(block, lbInfo->last_accessed_block[0]);
diff1 = ABS_DIFF(block, lbInfo->last_accessed_block[1]);
bestArm = (diff0 <= diff1 ? 0 : 1);
if ((bestArm == arm && pend0 > pend1 + 16) || (bestArm != arm && pend1 > pend0 + 16))
bestArm ^= 1;
/* Update the last accessed block on the correct pd */
lbInfo->last_accessed_block[bestArm] = block + count - 1;
return bestArm;
}
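megasas_get_best_arm prefers the mirror arm whose last accessed block is nearer the requested block, then overrides that choice when the pending-command counts are lopsided by more than 16. A self-contained sketch of the same decision rule (plain integers instead of atomics; `best_arm` is an illustrative name):

```c
#include <assert.h>
#include <stdint.h>

#define ABS_DIFF(a, b) (((a) > (b)) ? ((a) - (b)) : ((b) - (a)))

/* Same decision rule as above: pick the arm whose head (last accessed
 * block) is nearest, then flip the choice when the pending counts are
 * skewed by more than 16 relative to the arm currently in use. */
static uint8_t best_arm(uint64_t block,
			const uint64_t last_block[2],
			const uint16_t pending[2], uint8_t cur_arm)
{
	uint64_t diff0 = ABS_DIFF(block, last_block[0]);
	uint64_t diff1 = ABS_DIFF(block, last_block[1]);
	uint8_t arm = (diff0 <= diff1) ? 0 : 1;

	if ((arm == cur_arm && pending[0] > pending[1] + 16) ||
	    (arm != cur_arm && pending[1] > pending[0] + 16))
		arm ^= 1;
	return arm;
}
```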
u16 get_updated_dev_handle(PLD_LOAD_BALANCE_INFO lbInfo, struct IO_REQUEST_INFO *io_info)
{
u8 arm, old_arm;
u16 devHandle;
old_arm = lbInfo->raid1DevHandle[0] == io_info->devHandle ? 0 : 1;
/* get best new arm */
arm = megasas_get_best_arm(lbInfo, old_arm, io_info->ldStartBlock, io_info->numBlocks);
devHandle = lbInfo->raid1DevHandle[arm];
atomic_inc(&lbInfo->scsi_pending_cmds[arm]);
return devHandle;
}

File diff suppressed because it is too large


@@ -0,0 +1,782 @@
/*
* Linux MegaRAID driver for SAS based RAID controllers
*
* Copyright (c) 2009-2011 LSI Corporation.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
* FILE: megaraid_sas_fusion.h
*
* Authors: LSI Corporation
* Manoj Jose
* Sumant Patro
*
* Send feedback to: <megaraidlinux@lsi.com>
*
* Mail to: LSI Corporation, 1621 Barber Lane, Milpitas, CA 95035
* ATTN: Linuxraid
*/
#ifndef _MEGARAID_SAS_FUSION_H_
#define _MEGARAID_SAS_FUSION_H_
#define MEGASAS_MAX_SZ_CHAIN_FRAME 1024
#define MFI_FUSION_ENABLE_INTERRUPT_MASK (0x00000009)
#define MEGA_MPI2_RAID_DEFAULT_IO_FRAME_SIZE 256
#define MEGASAS_MPI2_FUNCTION_PASSTHRU_IO_REQUEST 0xF0
#define MEGASAS_MPI2_FUNCTION_LD_IO_REQUEST 0xF1
#define MEGASAS_LOAD_BALANCE_FLAG 0x1
#define MEGASAS_DCMD_MBOX_PEND_FLAG 0x1
#define HOST_DIAG_OFFSET_FUSION 0x8
#define HOST_DIAG_WRITE_ENABLE 0x80
#define HOST_DIAG_RESET_ADAPTER 0x4
#define MEGASAS_FUSION_MAX_RESET_TRIES 3
/* T10 PI defines */
#define MR_PROT_INFO_TYPE_CONTROLLER 0x8
#define MEGASAS_SCSI_VARIABLE_LENGTH_CMD 0x7f
#define MEGASAS_SCSI_SERVICE_ACTION_READ32 0x9
#define MEGASAS_SCSI_SERVICE_ACTION_WRITE32 0xB
#define MEGASAS_SCSI_ADDL_CDB_LEN 0x18
#define MEGASAS_RD_WR_PROTECT_CHECK_ALL 0x20
#define MEGASAS_RD_WR_PROTECT_CHECK_NONE 0x60
#define MEGASAS_EEDPBLOCKSIZE 512
/*
* Raid context flags
*/
#define MR_RAID_CTX_RAID_FLAGS_IO_SUB_TYPE_SHIFT 0x4
#define MR_RAID_CTX_RAID_FLAGS_IO_SUB_TYPE_MASK 0x30
typedef enum MR_RAID_FLAGS_IO_SUB_TYPE {
MR_RAID_FLAGS_IO_SUB_TYPE_NONE = 0,
MR_RAID_FLAGS_IO_SUB_TYPE_SYSTEM_PD = 1,
} MR_RAID_FLAGS_IO_SUB_TYPE;
/*
* Request descriptor types
*/
#define MEGASAS_REQ_DESCRIPT_FLAGS_LD_IO 0x7
#define MEGASAS_REQ_DESCRIPT_FLAGS_MFA 0x1
#define MEGASAS_REQ_DESCRIPT_FLAGS_TYPE_SHIFT 1
#define MEGASAS_FP_CMD_LEN 16
#define MEGASAS_FUSION_IN_RESET 0
/*
 * Raid Context structure which describes MegaRAID specific IO Parameters
* This resides at offset 0x60 where the SGL normally starts in MPT IO Frames
*/
typedef struct _RAID_CONTEXT {
u16 resvd0; // 0x00 -0x01
u16 timeoutValue; // 0x02 -0x03
u8 regLockFlags; // 0x04
u8 resvd1; // 0x05
u16 VirtualDiskTgtId; // 0x06 -0x07
u64 regLockRowLBA; // 0x08 - 0x0F
u32 regLockLength; // 0x10 - 0x13
u16 nextLMId; // 0x14 - 0x15
u8 exStatus; // 0x16
u8 status; // 0x17 status
u8 RAIDFlags; // 0x18 resvd[7:6], ioSubType[5:4], resvd[3:1], preferredCpu[0] */
u8 numSGE; // 0x19 numSge; not including chain entries */
u16 configSeqNum; // 0x1A -0x1B
u8 spanArm; // 0x1C span[7:5], arm[4:0] */
u8 resvd2[3]; // 0x1D-0x1f */
} RAID_CONTEXT;
#define RAID_CTX_SPANARM_ARM_SHIFT (0)
#define RAID_CTX_SPANARM_ARM_MASK (0x1f)
#define RAID_CTX_SPANARM_SPAN_SHIFT (5)
#define RAID_CTX_SPANARM_SPAN_MASK (0xE0)
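The spanArm byte packs the span into bits 7:5 and the arm into bits 4:0, exactly as MR_GetPhyParams does when it composes pRAID_Context->spanArm. A small sketch of packing and unpacking with the masks above (helper names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define RAID_CTX_SPANARM_ARM_SHIFT  (0)
#define RAID_CTX_SPANARM_ARM_MASK   (0x1f)
#define RAID_CTX_SPANARM_SPAN_SHIFT (5)
#define RAID_CTX_SPANARM_SPAN_MASK  (0xE0)

/* span lives in bits 7:5, arm in bits 4:0 of the spanArm byte */
static uint8_t pack_span_arm(uint8_t span, uint8_t arm)
{
	return (uint8_t)((span << RAID_CTX_SPANARM_SPAN_SHIFT) |
			 (arm & RAID_CTX_SPANARM_ARM_MASK));
}

static uint8_t span_of(uint8_t span_arm)
{
	return (span_arm & RAID_CTX_SPANARM_SPAN_MASK) >>
	       RAID_CTX_SPANARM_SPAN_SHIFT;
}

static uint8_t arm_of(uint8_t span_arm)
{
	return span_arm & RAID_CTX_SPANARM_ARM_MASK;
}
```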
/*
* define region lock types
*/
typedef enum _REGION_TYPE {
REGION_TYPE_UNUSED = 0, // lock is currently not active
REGION_TYPE_SHARED_READ = 1, // shared lock (for reads)
REGION_TYPE_SHARED_WRITE = 2,
REGION_TYPE_EXCLUSIVE = 3, // exclusive lock (for writes)
} REGION_TYPE;
/* MPI2 defines */
#define MPI2_FUNCTION_IOC_INIT (0x02) /* IOC Init */
#define MPI2_WHOINIT_HOST_DRIVER (0x04)
#define MPI2_VERSION_MAJOR (0x02)
#define MPI2_VERSION_MINOR (0x00)
#define MPI2_VERSION_MAJOR_MASK (0xFF00)
#define MPI2_VERSION_MAJOR_SHIFT (8)
#define MPI2_VERSION_MINOR_MASK (0x00FF)
#define MPI2_VERSION_MINOR_SHIFT (0)
#define MPI2_VERSION ((MPI2_VERSION_MAJOR << MPI2_VERSION_MAJOR_SHIFT) | \
MPI2_VERSION_MINOR)
#define MPI2_HEADER_VERSION_UNIT (0x10)
#define MPI2_HEADER_VERSION_DEV (0x00)
#define MPI2_HEADER_VERSION_UNIT_MASK (0xFF00)
#define MPI2_HEADER_VERSION_UNIT_SHIFT (8)
#define MPI2_HEADER_VERSION_DEV_MASK (0x00FF)
#define MPI2_HEADER_VERSION_DEV_SHIFT (0)
#define MPI2_HEADER_VERSION ((MPI2_HEADER_VERSION_UNIT << 8) | MPI2_HEADER_VERSION_DEV)
#define MPI2_IEEE_SGE_FLAGS_IOCPLBNTA_ADDR (0x03)
#define MPI2_SCSIIO_EEDPFLAGS_INC_PRI_REFTAG (0x8000)
#define MPI2_SCSIIO_EEDPFLAGS_CHECK_REFTAG (0x0400)
#define MPI2_SCSIIO_EEDPFLAGS_CHECK_REMOVE_OP (0x0003)
#define MPI2_SCSIIO_EEDPFLAGS_CHECK_APPTAG (0x0200)
#define MPI2_SCSIIO_EEDPFLAGS_CHECK_GUARD (0x0100)
#define MPI2_SCSIIO_EEDPFLAGS_INSERT_OP (0x0004)
#define MPI2_FUNCTION_SCSI_IO_REQUEST (0x00) /* SCSI IO */
#define MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY (0x06)
#define MPI2_REQ_DESCRIPT_FLAGS_SCSI_IO (0x00)
#define MPI2_SGE_FLAGS_64_BIT_ADDRESSING (0x02)
#define MPI2_SCSIIO_CONTROL_WRITE (0x01000000)
#define MPI2_SCSIIO_CONTROL_READ (0x02000000)
#define MPI2_REQ_DESCRIPT_FLAGS_TYPE_MASK (0x0E)
#define MPI2_RPY_DESCRIPT_FLAGS_UNUSED (0x0F)
#define MPI2_RPY_DESCRIPT_FLAGS_SCSI_IO_SUCCESS (0x00)
#define MPI2_RPY_DESCRIPT_FLAGS_TYPE_MASK (0x0F)
#define MPI2_WRSEQ_FLUSH_KEY_VALUE (0x0)
#define MPI2_WRITE_SEQUENCE_OFFSET (0x00000004)
#define MPI2_WRSEQ_1ST_KEY_VALUE (0xF)
#define MPI2_WRSEQ_2ND_KEY_VALUE (0x4)
#define MPI2_WRSEQ_3RD_KEY_VALUE (0xB)
#define MPI2_WRSEQ_4TH_KEY_VALUE (0x2)
#define MPI2_WRSEQ_5TH_KEY_VALUE (0x7)
#define MPI2_WRSEQ_6TH_KEY_VALUE (0xD)
#ifndef MPI2_POINTER
#define MPI2_POINTER *
#endif
typedef struct _MPI25_IEEE_SGE_CHAIN64
{
u64 Address;
u32 Length;
u16 Reserved1;
u8 NextChainOffset;
u8 Flags;
} MPI25_IEEE_SGE_CHAIN64, MPI2_POINTER PTR_MPI25_IEEE_SGE_CHAIN64,
Mpi25IeeeSgeChain64_t, MPI2_POINTER pMpi25IeeeSgeChain64_t;
typedef struct _MPI2_SGE_SIMPLE_UNION
{
u32 FlagsLength;
union
{
u32 Address32;
u64 Address64;
} u;
} MPI2_SGE_SIMPLE_UNION, MPI2_POINTER PTR_MPI2_SGE_SIMPLE_UNION,
Mpi2SGESimpleUnion_t, MPI2_POINTER pMpi2SGESimpleUnion_t;
typedef struct
{
u8 CDB[20]; /* 0x00 */
u32 PrimaryReferenceTag; /* 0x14 */
u16 PrimaryApplicationTag; /* 0x18 */
u16 PrimaryApplicationTagMask; /* 0x1A */
u32 TransferLength; /* 0x1C */
} MPI2_SCSI_IO_CDB_EEDP32, MPI2_POINTER PTR_MPI2_SCSI_IO_CDB_EEDP32,
Mpi2ScsiIoCdbEedp32_t, MPI2_POINTER pMpi2ScsiIoCdbEedp32_t;
typedef struct _MPI2_SGE_CHAIN_UNION
{
u16 Length;
u8 NextChainOffset;
u8 Flags;
union
{
u32 Address32;
u64 Address64;
} u;
} MPI2_SGE_CHAIN_UNION, MPI2_POINTER PTR_MPI2_SGE_CHAIN_UNION,
Mpi2SGEChainUnion_t, MPI2_POINTER pMpi2SGEChainUnion_t;
typedef struct _MPI2_IEEE_SGE_SIMPLE32
{
u32 Address;
u32 FlagsLength;
} MPI2_IEEE_SGE_SIMPLE32, MPI2_POINTER PTR_MPI2_IEEE_SGE_SIMPLE32,
Mpi2IeeeSgeSimple32_t, MPI2_POINTER pMpi2IeeeSgeSimple32_t;
typedef struct _MPI2_IEEE_SGE_SIMPLE64
{
u64 Address;
u32 Length;
u16 Reserved1;
u8 Reserved2;
u8 Flags;
} MPI2_IEEE_SGE_SIMPLE64, MPI2_POINTER PTR_MPI2_IEEE_SGE_SIMPLE64,
Mpi2IeeeSgeSimple64_t, MPI2_POINTER pMpi2IeeeSgeSimple64_t;
typedef union _MPI2_IEEE_SGE_SIMPLE_UNION
{
MPI2_IEEE_SGE_SIMPLE32 Simple32;
MPI2_IEEE_SGE_SIMPLE64 Simple64;
} MPI2_IEEE_SGE_SIMPLE_UNION, MPI2_POINTER PTR_MPI2_IEEE_SGE_SIMPLE_UNION,
Mpi2IeeeSgeSimpleUnion_t, MPI2_POINTER pMpi2IeeeSgeSimpleUnion_t;
typedef MPI2_IEEE_SGE_SIMPLE32 MPI2_IEEE_SGE_CHAIN32;
typedef MPI2_IEEE_SGE_SIMPLE64 MPI2_IEEE_SGE_CHAIN64;
typedef union _MPI2_IEEE_SGE_CHAIN_UNION
{
MPI2_IEEE_SGE_CHAIN32 Chain32;
MPI2_IEEE_SGE_CHAIN64 Chain64;
} MPI2_IEEE_SGE_CHAIN_UNION, MPI2_POINTER PTR_MPI2_IEEE_SGE_CHAIN_UNION,
Mpi2IeeeSgeChainUnion_t, MPI2_POINTER pMpi2IeeeSgeChainUnion_t;
typedef union _MPI2_SGE_IO_UNION
{
MPI2_SGE_SIMPLE_UNION MpiSimple;
MPI2_SGE_CHAIN_UNION MpiChain;
MPI2_IEEE_SGE_SIMPLE_UNION IeeeSimple;
MPI2_IEEE_SGE_CHAIN_UNION IeeeChain;
} MPI2_SGE_IO_UNION, MPI2_POINTER PTR_MPI2_SGE_IO_UNION,
Mpi2SGEIOUnion_t, MPI2_POINTER pMpi2SGEIOUnion_t;
typedef union
{
u8 CDB32[32];
MPI2_SCSI_IO_CDB_EEDP32 EEDP32;
MPI2_SGE_SIMPLE_UNION SGE;
} MPI2_SCSI_IO_CDB_UNION, MPI2_POINTER PTR_MPI2_SCSI_IO_CDB_UNION,
Mpi2ScsiIoCdb_t, MPI2_POINTER pMpi2ScsiIoCdb_t;
/*
* RAID SCSI IO Request Message
* Total SGE count will be one less than _MPI2_SCSI_IO_REQUEST
*/
typedef struct _MPI2_RAID_SCSI_IO_REQUEST
{
u16 DevHandle; /* 0x00 */
u8 ChainOffset; /* 0x02 */
u8 Function; /* 0x03 */
u16 Reserved1; /* 0x04 */
u8 Reserved2; /* 0x06 */
u8 MsgFlags; /* 0x07 */
u8 VP_ID; /* 0x08 */
u8 VF_ID; /* 0x09 */
u16 Reserved3; /* 0x0A */
u32 SenseBufferLowAddress; /* 0x0C */
u16 SGLFlags; /* 0x10 */
u8 SenseBufferLength; /* 0x12 */
u8 Reserved4; /* 0x13 */
u8 SGLOffset0; /* 0x14 */
u8 SGLOffset1; /* 0x15 */
u8 SGLOffset2; /* 0x16 */
u8 SGLOffset3; /* 0x17 */
u32 SkipCount; /* 0x18 */
u32 DataLength; /* 0x1C */
u32 BidirectionalDataLength; /* 0x20 */
u16 IoFlags; /* 0x24 */
u16 EEDPFlags; /* 0x26 */
u32 EEDPBlockSize; /* 0x28 */
u32 SecondaryReferenceTag; /* 0x2C */
u16 SecondaryApplicationTag; /* 0x30 */
u16 ApplicationTagTranslationMask; /* 0x32 */
u8 LUN[8]; /* 0x34 */
u32 Control; /* 0x3C */
MPI2_SCSI_IO_CDB_UNION CDB; /* 0x40 */
RAID_CONTEXT RaidContext; /* 0x60 */
MPI2_SGE_IO_UNION SGL; /* 0x80 */
} MEGASAS_RAID_SCSI_IO_REQUEST, MPI2_POINTER PTR_MEGASAS_RAID_SCSI_IO_REQUEST,
MEGASASRaidSCSIIORequest_t, MPI2_POINTER pMEGASASRaidSCSIIORequest_t;
/*
* MPT RAID MFA IO Descriptor.
*/
typedef struct _MEGASAS_RAID_MFA_IO_DESCRIPTOR {
u32 RequestFlags : 8;
u32 MessageAddress1 : 24; /* bits 31:8*/
u32 MessageAddress2; /* bits 61:32 */
} MEGASAS_RAID_MFA_IO_REQUEST_DESCRIPTOR,*PMEGASAS_RAID_MFA_IO_REQUEST_DESCRIPTOR;
/* Default Request Descriptor */
typedef struct _MPI2_DEFAULT_REQUEST_DESCRIPTOR
{
u8 RequestFlags; /* 0x00 */
u8 MSIxIndex; /* 0x01 */
u16 SMID; /* 0x02 */
u16 LMID; /* 0x04 */
u16 DescriptorTypeDependent; /* 0x06 */
} MPI2_DEFAULT_REQUEST_DESCRIPTOR,
MPI2_POINTER PTR_MPI2_DEFAULT_REQUEST_DESCRIPTOR,
Mpi2DefaultRequestDescriptor_t, MPI2_POINTER pMpi2DefaultRequestDescriptor_t;
/* High Priority Request Descriptor */
typedef struct _MPI2_HIGH_PRIORITY_REQUEST_DESCRIPTOR
{
u8 RequestFlags; /* 0x00 */
u8 MSIxIndex; /* 0x01 */
u16 SMID; /* 0x02 */
u16 LMID; /* 0x04 */
u16 Reserved1; /* 0x06 */
} MPI2_HIGH_PRIORITY_REQUEST_DESCRIPTOR,
MPI2_POINTER PTR_MPI2_HIGH_PRIORITY_REQUEST_DESCRIPTOR,
Mpi2HighPriorityRequestDescriptor_t,
MPI2_POINTER pMpi2HighPriorityRequestDescriptor_t;
/* SCSI IO Request Descriptor */
typedef struct _MPI2_SCSI_IO_REQUEST_DESCRIPTOR
{
u8 RequestFlags; /* 0x00 */
u8 MSIxIndex; /* 0x01 */
u16 SMID; /* 0x02 */
u16 LMID; /* 0x04 */
u16 DevHandle; /* 0x06 */
} MPI2_SCSI_IO_REQUEST_DESCRIPTOR,
MPI2_POINTER PTR_MPI2_SCSI_IO_REQUEST_DESCRIPTOR,
Mpi2SCSIIORequestDescriptor_t, MPI2_POINTER pMpi2SCSIIORequestDescriptor_t;
/* SCSI Target Request Descriptor */
typedef struct _MPI2_SCSI_TARGET_REQUEST_DESCRIPTOR
{
u8 RequestFlags; /* 0x00 */
u8 MSIxIndex; /* 0x01 */
u16 SMID; /* 0x02 */
u16 LMID; /* 0x04 */
u16 IoIndex; /* 0x06 */
} MPI2_SCSI_TARGET_REQUEST_DESCRIPTOR,
MPI2_POINTER PTR_MPI2_SCSI_TARGET_REQUEST_DESCRIPTOR,
Mpi2SCSITargetRequestDescriptor_t,
MPI2_POINTER pMpi2SCSITargetRequestDescriptor_t;
/* RAID Accelerator Request Descriptor */
typedef struct _MPI2_RAID_ACCEL_REQUEST_DESCRIPTOR
{
u8 RequestFlags; /* 0x00 */
u8 MSIxIndex; /* 0x01 */
u16 SMID; /* 0x02 */
u16 LMID; /* 0x04 */
u16 Reserved; /* 0x06 */
} MPI2_RAID_ACCEL_REQUEST_DESCRIPTOR,
MPI2_POINTER PTR_MPI2_RAID_ACCEL_REQUEST_DESCRIPTOR,
Mpi2RAIDAcceleratorRequestDescriptor_t,
MPI2_POINTER pMpi2RAIDAcceleratorRequestDescriptor_t;
/* union of Request Descriptors */
typedef union _MEGASAS_REQUEST_DESCRIPTOR_UNION
{
MPI2_DEFAULT_REQUEST_DESCRIPTOR Default;
MPI2_HIGH_PRIORITY_REQUEST_DESCRIPTOR HighPriority;
MPI2_SCSI_IO_REQUEST_DESCRIPTOR SCSIIO;
MPI2_SCSI_TARGET_REQUEST_DESCRIPTOR SCSITarget;
MPI2_RAID_ACCEL_REQUEST_DESCRIPTOR RAIDAccelerator;
MEGASAS_RAID_MFA_IO_REQUEST_DESCRIPTOR MFAIo;
union {
struct {
u32 low;
u32 high;
} u;
u64 Words;
};
} MEGASAS_REQUEST_DESCRIPTOR_UNION;
/* Default Reply Descriptor */
typedef struct _MPI2_DEFAULT_REPLY_DESCRIPTOR
{
u8 ReplyFlags; /* 0x00 */
u8 MSIxIndex; /* 0x01 */
u16 DescriptorTypeDependent1; /* 0x02 */
u32 DescriptorTypeDependent2; /* 0x04 */
} MPI2_DEFAULT_REPLY_DESCRIPTOR, MPI2_POINTER PTR_MPI2_DEFAULT_REPLY_DESCRIPTOR,
Mpi2DefaultReplyDescriptor_t, MPI2_POINTER pMpi2DefaultReplyDescriptor_t;
/* Address Reply Descriptor */
typedef struct _MPI2_ADDRESS_REPLY_DESCRIPTOR
{
u8 ReplyFlags; /* 0x00 */
u8 MSIxIndex; /* 0x01 */
u16 SMID; /* 0x02 */
u32 ReplyFrameAddress; /* 0x04 */
} MPI2_ADDRESS_REPLY_DESCRIPTOR, MPI2_POINTER PTR_MPI2_ADDRESS_REPLY_DESCRIPTOR,
Mpi2AddressReplyDescriptor_t, MPI2_POINTER pMpi2AddressReplyDescriptor_t;
/* SCSI IO Success Reply Descriptor */
typedef struct _MPI2_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR
{
u8 ReplyFlags; /* 0x00 */
u8 MSIxIndex; /* 0x01 */
u16 SMID; /* 0x02 */
u16 TaskTag; /* 0x04 */
u16 Reserved1; /* 0x06 */
} MPI2_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR,
MPI2_POINTER PTR_MPI2_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR,
Mpi2SCSIIOSuccessReplyDescriptor_t,
MPI2_POINTER pMpi2SCSIIOSuccessReplyDescriptor_t;
/* TargetAssist Success Reply Descriptor */
typedef struct _MPI2_TARGETASSIST_SUCCESS_REPLY_DESCRIPTOR
{
u8 ReplyFlags; /* 0x00 */
u8 MSIxIndex; /* 0x01 */
u16 SMID; /* 0x02 */
u8 SequenceNumber; /* 0x04 */
u8 Reserved1; /* 0x05 */
u16 IoIndex; /* 0x06 */
} MPI2_TARGETASSIST_SUCCESS_REPLY_DESCRIPTOR,
MPI2_POINTER PTR_MPI2_TARGETASSIST_SUCCESS_REPLY_DESCRIPTOR,
Mpi2TargetAssistSuccessReplyDescriptor_t,
MPI2_POINTER pMpi2TargetAssistSuccessReplyDescriptor_t;
/* Target Command Buffer Reply Descriptor */
typedef struct _MPI2_TARGET_COMMAND_BUFFER_REPLY_DESCRIPTOR
{
u8 ReplyFlags; /* 0x00 */
u8 MSIxIndex; /* 0x01 */
u8 VP_ID; /* 0x02 */
u8 Flags; /* 0x03 */
u16 InitiatorDevHandle; /* 0x04 */
u16 IoIndex; /* 0x06 */
} MPI2_TARGET_COMMAND_BUFFER_REPLY_DESCRIPTOR,
MPI2_POINTER PTR_MPI2_TARGET_COMMAND_BUFFER_REPLY_DESCRIPTOR,
Mpi2TargetCommandBufferReplyDescriptor_t,
MPI2_POINTER pMpi2TargetCommandBufferReplyDescriptor_t;
/* RAID Accelerator Success Reply Descriptor */
typedef struct _MPI2_RAID_ACCELERATOR_SUCCESS_REPLY_DESCRIPTOR
{
u8 ReplyFlags; /* 0x00 */
u8 MSIxIndex; /* 0x01 */
u16 SMID; /* 0x02 */
u32 Reserved; /* 0x04 */
} MPI2_RAID_ACCELERATOR_SUCCESS_REPLY_DESCRIPTOR,
MPI2_POINTER PTR_MPI2_RAID_ACCELERATOR_SUCCESS_REPLY_DESCRIPTOR,
Mpi2RAIDAcceleratorSuccessReplyDescriptor_t,
MPI2_POINTER pMpi2RAIDAcceleratorSuccessReplyDescriptor_t;
/* union of Reply Descriptors */
typedef union _MPI2_REPLY_DESCRIPTORS_UNION
{
MPI2_DEFAULT_REPLY_DESCRIPTOR Default;
MPI2_ADDRESS_REPLY_DESCRIPTOR AddressReply;
MPI2_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR SCSIIOSuccess;
MPI2_TARGETASSIST_SUCCESS_REPLY_DESCRIPTOR TargetAssistSuccess;
MPI2_TARGET_COMMAND_BUFFER_REPLY_DESCRIPTOR TargetCommandBuffer;
MPI2_RAID_ACCELERATOR_SUCCESS_REPLY_DESCRIPTOR RAIDAcceleratorSuccess;
u64 Words;
} MPI2_REPLY_DESCRIPTORS_UNION, MPI2_POINTER PTR_MPI2_REPLY_DESCRIPTORS_UNION,
Mpi2ReplyDescriptorsUnion_t, MPI2_POINTER pMpi2ReplyDescriptorsUnion_t;
/* IOCInit Request message */
typedef struct _MPI2_IOC_INIT_REQUEST
{
u8 WhoInit; /* 0x00 */
u8 Reserved1; /* 0x01 */
u8 ChainOffset; /* 0x02 */
u8 Function; /* 0x03 */
u16 Reserved2; /* 0x04 */
u8 Reserved3; /* 0x06 */
u8 MsgFlags; /* 0x07 */
u8 VP_ID; /* 0x08 */
u8 VF_ID; /* 0x09 */
u16 Reserved4; /* 0x0A */
u16 MsgVersion; /* 0x0C */
u16 HeaderVersion; /* 0x0E */
u32 Reserved5; /* 0x10 */
u16 Reserved6; /* 0x14 */
u8 Reserved7; /* 0x16 */
u8 HostMSIxVectors; /* 0x17 */
u16 Reserved8; /* 0x18 */
u16 SystemRequestFrameSize; /* 0x1A */
u16 ReplyDescriptorPostQueueDepth; /* 0x1C */
u16 ReplyFreeQueueDepth; /* 0x1E */
u32 SenseBufferAddressHigh; /* 0x20 */
u32 SystemReplyAddressHigh; /* 0x24 */
u64 SystemRequestFrameBaseAddress; /* 0x28 */
u64 ReplyDescriptorPostQueueAddress;/* 0x30 */
u64 ReplyFreeQueueAddress; /* 0x38 */
u64 TimeStamp; /* 0x40 */
} MPI2_IOC_INIT_REQUEST, MPI2_POINTER PTR_MPI2_IOC_INIT_REQUEST,
Mpi2IOCInitRequest_t, MPI2_POINTER pMpi2IOCInitRequest_t;
/* mrpriv defines */
#define MR_PD_INVALID 0xFFFF
#define MAX_SPAN_DEPTH 8
#define MAX_RAIDMAP_SPAN_DEPTH (MAX_SPAN_DEPTH)
#define MAX_ROW_SIZE 32
#define MAX_RAIDMAP_ROW_SIZE (MAX_ROW_SIZE)
#define MAX_LOGICAL_DRIVES 64
#define MAX_RAIDMAP_LOGICAL_DRIVES (MAX_LOGICAL_DRIVES)
#define MAX_RAIDMAP_VIEWS (MAX_LOGICAL_DRIVES)
#define MAX_ARRAYS 128
#define MAX_RAIDMAP_ARRAYS (MAX_ARRAYS)
#define MAX_PHYSICAL_DEVICES 256
#define MAX_RAIDMAP_PHYSICAL_DEVICES (MAX_PHYSICAL_DEVICES)
#define MR_DCMD_LD_MAP_GET_INFO 0x0300e101 // get the mapping information of this LD
typedef enum _MR_CTRL_IO_METRICS_CMD_TYPE {
MR_CTRL_IO_METRICS_CMD_START = 0, // Start (or restart) Full IO metrics collection
MR_CTRL_IO_METRICS_CMD_STOP = 1, // Stop IO metrics collection
MR_CTRL_IO_METRICS_CMD_START_BASIC = 2, // Start collection of only basic IO metrics
MR_CTRL_IO_METRICS_CMD_SEND_IDLE_FP = 3, // Send bitmap of LDs that are idle with respect to FP
} MR_CTRL_IO_METRICS_CMD_TYPE;
typedef struct _MR_DEV_HANDLE_INFO {
u16 curDevHdl; // the device handle currently used by fw to issue the command.
u8 validHandles; // bitmap of valid device handles.
u8 reserved;
u16 devHandle[2]; // 0x04 dev handles for all the paths.
} MR_DEV_HANDLE_INFO; // 0x08, Total Size
typedef struct _MR_ARRAY_INFO {
u16 pd[MAX_RAIDMAP_ROW_SIZE];
} MR_ARRAY_INFO; // 0x40, Total Size
typedef struct _MR_QUAD_ELEMENT {
u64 logStart; // 0x00
u64 logEnd; // 0x08
u64 offsetInSpan; // 0x10
u32 diff; // 0x18
u32 reserved1; // 0x1C
} MR_QUAD_ELEMENT; // 0x20, Total size
typedef struct _MR_SPAN_INFO {
u32 noElements; // 0x00
u32 reserved1; // 0x04
MR_QUAD_ELEMENT quad[MAX_RAIDMAP_SPAN_DEPTH]; // 0x08
} MR_SPAN_INFO; // 0x108, Total size
typedef struct _MR_LD_SPAN_ { // SPAN structure
u64 startBlk; // 0x00, starting block number in array
u64 numBlks; // 0x08, number of blocks
u16 arrayRef; // 0x10, array reference
u8 reserved[6]; // 0x12
} MR_LD_SPAN; // 0x18, Total Size
typedef struct _MR_SPAN_BLOCK_INFO {
u64 num_rows; // number of rows/span
MR_LD_SPAN span; // 0x08
MR_SPAN_INFO block_span_info; // 0x20
} MR_SPAN_BLOCK_INFO; // 0x128, Total Size
typedef struct _MR_LD_RAID {
struct {
u32 fpCapable :1;
u32 reserved5 :3;
u32 ldPiMode :4;
u32 pdPiMode :4; // Every Pd has to be same.
u32 encryptionType :8; // FDE or controller encryption (MR_LD_ENCRYPTION_TYPE)
u32 fpWriteCapable :1;
u32 fpReadCapable :1;
u32 fpWriteAcrossStripe :1;
u32 fpReadAcrossStripe :1;
u32 reserved4 :8;
} capability; // 0x00
u32 reserved6;
u64 size; // 0x08, LD size in blocks
u8 spanDepth; // 0x10, Total Number of Spans
u8 level; // 0x11, RAID level
u8 stripeShift; // 0x12, shift-count to get stripe size (0=512, 1=1K, 7=64K, etc.)
u8 rowSize; // 0x13, number of disks in a row
u8 rowDataSize; // 0x14, number of data disks in a row
u8 writeMode; // 0x15, WRITE_THROUGH or WRITE_BACK
u8 PRL; // 0x16, To differentiate between RAID1 and RAID1E
u8 SRL; // 0x17
u16 targetId; // 0x18, ld Target Id.
u8 ldState; // 0x1a, state of ld, state corresponds to MR_LD_STATE
u8 regTypeReqOnWrite; // 0x1b, Pre calculate region type requests based on MFC etc..
u8 modFactor; // 0x1c, same as rowSize,
u8 reserved2[1]; // 0x1d
u16 seqNum; // 0x1e, LD sequence number
struct {
u32 ldSyncRequired:1; // This LD requires sync command before completing
u32 reserved:31;
} flags; // 0x20
u8 reserved3[0x5C]; // 0x24
} MR_LD_RAID; // 0x80, Total Size
typedef struct _MR_LD_SPAN_MAP {
MR_LD_RAID ldRaid; // 0x00
u8 dataArmMap[MAX_RAIDMAP_ROW_SIZE]; // 0x80, needed for GET_ARM() - R0/1/5 only.
MR_SPAN_BLOCK_INFO spanBlock[MAX_RAIDMAP_SPAN_DEPTH]; // 0xA0
} MR_LD_SPAN_MAP; // 0x9E0
typedef struct _MR_FW_RAID_MAP {
u32 totalSize; // total size of this structure, including this field.
union {
struct { // Simple method of version checking variables
u32 maxLd;
u32 maxSpanDepth;
u32 maxRowSize;
u32 maxPdCount;
u32 maxArrays;
} validationInfo;
u32 version[5];
u32 reserved1[5];
};
u32 ldCount; // count of lds.
u32 Reserved1; //
u8 ldTgtIdToLd[MAX_RAIDMAP_LOGICAL_DRIVES+MAX_RAIDMAP_VIEWS]; // 0x20. This doesn't correspond to
// FW Ld Tgt Id to LD, but will purge. For example: if tgt Id is 4
// and FW LD is 2, and there is only one LD, FW will populate the
// array like this. [0xFF, 0xFF, 0xFF, 0xFF, 0x0,.....]. This is to
// help reduce the entire structure size if there are few LDs or
// driver is looking info for 1 LD only.
u8 fpPdIoTimeoutSec; // timeout value used by driver in FP IOs
u8 reserved2[7];
MR_ARRAY_INFO arMapInfo[MAX_RAIDMAP_ARRAYS]; // 0x00a8
MR_DEV_HANDLE_INFO devHndlInfo[MAX_RAIDMAP_PHYSICAL_DEVICES]; // 0x20a8
MR_LD_SPAN_MAP ldSpanMap[1]; // 0x28a8 - [0 - MAX_RAIDMAP_LOGICAL_DRIVES + MAX_RAIDMAP_VIEWS + 1];
} MR_FW_RAID_MAP; // 0x3288, Total Size
struct IO_REQUEST_INFO {
u64 ldStartBlock;
u32 numBlocks;
u16 ldTgtId;
u8 isRead;
u16 devHandle;
u64 pdBlock;
u8 fpOkForIo;
};
typedef struct _MR_LD_TARGET_SYNC {
u8 targetId;
u8 reserved;
u16 seqNum;
} MR_LD_TARGET_SYNC;
#define IEEE_SGE_FLAGS_ADDR_MASK (0x03)
#define IEEE_SGE_FLAGS_SYSTEM_ADDR (0x00)
#define IEEE_SGE_FLAGS_IOCDDR_ADDR (0x01)
#define IEEE_SGE_FLAGS_IOCPLB_ADDR (0x02)
#define IEEE_SGE_FLAGS_IOCPLBNTA_ADDR (0x03)
#define IEEE_SGE_FLAGS_CHAIN_ELEMENT (0x80)
#define IEEE_SGE_FLAGS_END_OF_LIST (0x40)
struct megasas_register_set;
struct megasas_instance;
union desc_word {
u64 word;
struct {
u32 low;
u32 high;
} u;
};
struct megasas_cmd_fusion {
MEGASAS_RAID_SCSI_IO_REQUEST *io_request;
dma_addr_t io_request_phys_addr;
MPI2_SGE_IO_UNION *sg_frame;
dma_addr_t sg_frame_phys_addr;
u8 *sense;
dma_addr_t sense_phys_addr;
struct list_head list;
struct scsi_cmnd *scmd;
struct megasas_instance *instance;
u8 retry_for_fw_reset;
MEGASAS_REQUEST_DESCRIPTOR_UNION *request_desc;
/*
* Context for a MFI frame.
* Used to get the mfi cmd from list when a MFI cmd is completed
*/
u32 sync_cmd_idx;
u32 index;
u8 flags;
};
typedef struct _LD_LOAD_BALANCE_INFO
{
u8 loadBalanceFlag;
u8 reserved1;
u16 raid1DevHandle[2];
atomic_t scsi_pending_cmds[2];
u64 last_accessed_block[2];
} LD_LOAD_BALANCE_INFO, *PLD_LOAD_BALANCE_INFO;
typedef struct _MR_FW_RAID_MAP_ALL {
MR_FW_RAID_MAP raidMap;
MR_LD_SPAN_MAP ldSpanMap[MAX_LOGICAL_DRIVES - 1];
} __attribute__ ((packed)) MR_FW_RAID_MAP_ALL;
struct fusion_context
{
struct megasas_cmd_fusion **cmd_list;
struct list_head cmd_pool;
spinlock_t cmd_pool_lock;
dma_addr_t req_frames_desc_phys;
u8 *req_frames_desc;
struct dma_pool *io_request_frames_pool;
dma_addr_t io_request_frames_phys;
u8 *io_request_frames;
struct dma_pool *sg_dma_pool;
struct dma_pool *sense_dma_pool;
dma_addr_t reply_frames_desc_phys;
Mpi2ReplyDescriptorsUnion_t *reply_frames_desc;
struct dma_pool *reply_frames_desc_pool;
u16 last_reply_idx;
u32 reply_q_depth;
u32 request_alloc_sz;
u32 reply_alloc_sz;
u32 io_frames_alloc_sz;
u16 max_sge_in_main_msg;
u16 max_sge_in_chain;
u8 chain_offset_io_request;
u8 chain_offset_mfi_pthru;
MR_FW_RAID_MAP_ALL *ld_map[2];
dma_addr_t ld_map_phys[2];
u32 map_sz;
u8 fast_path_io;
LD_LOAD_BALANCE_INFO load_balance_info[MAX_LOGICAL_DRIVES];
};
union desc_value {
u64 word;
struct {
u32 low;
u32 high;
} u;
};
#endif //_MEGARAID_SAS_FUSION_H_

View file

@@ -75,7 +75,7 @@
 #define MPT2SAS_AUTHOR "LSI Corporation <DL-MPTFusionLinux@lsi.com>"
 #define MPT2SAS_DESCRIPTION "LSI MPT Fusion SAS 2.0 Device Driver"
 #if defined(__VMKLNX__)
-#define MPT2SAS_DRIVER_VERSION "06.00.00.00.5vmw"
+#define MPT2SAS_DRIVER_VERSION "06.00.00.00.6vmw"
 #else
 #define MPT2SAS_DRIVER_VERSION "06.00.00.00"
 #endif

View file

@@ -5629,10 +5629,27 @@ _scsih_io_done(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 msix_index, u32 reply)
 		    smid);
 		u32 sz = min_t(u32, SCSI_SENSE_BUFFERSIZE,
 		    le32_to_cpu(mpi_reply->SenseCount));
-		memcpy(scmd->sense_buffer, sense_data, sz);
 #if defined(__VMKLNX__)
+		/*
+		 * PR 794629: INQUIRY for specific pages on certain 3TB drives
+		 * returns descriptor format sense data, which is not supported
+		 * by ESX. As a workaround we translate descriptor format sense
+		 * data into fixed format here to support this kind of drives.
+		 */
+		char *srcsense = (char *) sense_data;
+		if (scmd->cmnd[0] == 0x12 && scmd->cmnd[1] == 0x01 &&
+		    scmd->cmnd[2] == 0x89 && ((srcsense[0] & 0x7F) >= 0x72)) {
+			if (sz > 3)
+				mpt_scsi_build_sense_buffer(0,
+				    scmd->sense_buffer, (srcsense[1] & 0x0F),
+				    srcsense[2], srcsense[3]);
+			else
+				memcpy(scmd->sense_buffer, sense_data, sz);
+		} else
+			memcpy(scmd->sense_buffer, sense_data, sz);
 		_scsih_normalize_sense((char *)scmd->sense_buffer, &data);
 #else
+		memcpy(scmd->sense_buffer, sense_data, sz);
 		_scsih_normalize_sense(scmd->sense_buffer, &data);
 #endif
 		/* failure prediction threshold exceeded */

View file

@@ -849,7 +849,8 @@ static int usbdev_release(struct inode *inode, struct file *file)
 	list_del_init(&ps->list);
 #if defined(__VMKLNX__)
 	if (file->f_mode & FMODE_WRITE) {
-		if (!dev->passthrough || !dev->in_use) {
+		if ((!dev->passthrough && dev->descriptor.idVendor != USB_VENDORID_IBM) ||
+		    !dev->in_use) {
 			dev_err (&dev->dev, "usbdev_release: USB passthrough "
 				 "device opened for write but not in use: %d, %d\n",
 				 dev->passthrough, dev->in_use);

View file

@@ -30,6 +30,82 @@
 static inline void device_set_wakeup_capable(struct device *dev, int val) { }
 #endif
 
+#if defined (__VMKLNX__)
+/*
+ * Intel's Panther Point chipset has two host controllers (EHCI and xHCI) that
+ * share some number of ports. These ports can be switched between either
+ * controller. Not all of the ports under the EHCI host controller may be
+ * switchable.
+ *
+ * Since we are not currently shipping an xHCI driver, the ports should be
+ * switched over to EHCI to make sure that customers can use them even in
+ * cases when BIOS starts with these ports mapped to xHCI.
+ */
+#define USB_INTEL_XUSB2PR    0xD0
+#define USB_INTEL_XUSB2PRM   0xD4
+#define USB_INTEL_USB3_PSSEN 0xD8
+#define USB_INTEL_USB3PRM    0xDC
+
+static bool usb_is_intel_switchable_ehci(struct pci_dev *pdev)
+{
+	return pdev->class == PCI_CLASS_SERIAL_USB_EHCI &&
+		pdev->vendor == PCI_VENDOR_ID_INTEL &&
+		pdev->device == 0x1e26; /* PantherPoint EHCI */
+}
+
+static bool usb_is_intel_switchable_xhci(struct pci_dev *pdev)
+{
+	return pdev->class == PCI_CLASS_SERIAL_USB_XHCI &&
+		pdev->vendor == PCI_VENDOR_ID_INTEL &&
+		pdev->device == 0x1e31; /* PantherPoint xHCI */
+}
+
+static void ehci_disable_xhci_ports(struct pci_dev *xhci_pdev)
+{
+	u32 ports, old_ports, mask;
+
+	/*
+	 * First disable SuperSpeed terminations on all ports that
+	 * can be controlled (as indicated by the BIOS).
+	 */
+	pci_read_config_dword(xhci_pdev, USB_INTEL_USB3_PSSEN, &ports);
+	pci_read_config_dword(xhci_pdev, USB_INTEL_USB3PRM, &mask);
+	pci_write_config_dword(xhci_pdev, USB_INTEL_USB3_PSSEN,
+			       ports & ~(mask & 0xf));
+
+	/*
+	 * Write XUSB2PR, the xHC USB 2.0 Port Routing Register, to
+	 * switch the USB 2.0 power and data lines over to the xHCI
+	 * host.
+	 */
+	pci_read_config_dword(xhci_pdev, USB_INTEL_XUSB2PR, &old_ports);
+	pci_read_config_dword(xhci_pdev, USB_INTEL_XUSB2PRM, &mask);
+	ports = old_ports & ~(mask & 0xf);
+	pci_write_config_dword(xhci_pdev, USB_INTEL_XUSB2PR, ports);
+	pci_read_config_dword(xhci_pdev, USB_INTEL_XUSB2PR, &ports);
+
+	dev_info(&xhci_pdev->dev,
+		 "USB3.0 ports routing map: 0x%x (was 0x%x)\n",
+		 ports, old_ports);
+}
+
+static void ehci_disable_xhci_companion(void)
+{
+	struct pci_dev *companion = NULL;
+
+	/* The xHCI and EHCI controllers are not on the same PCI slot */
+	for_each_pci_dev(companion) {
+		if (usb_is_intel_switchable_xhci(companion)) {
+			ehci_disable_xhci_ports(companion);
+			return;
+		}
+	}
+}
+#endif
+
 /*-------------------------------------------------------------------------*/
 
 /* called after powerup, by probe or system-pm "wakeup" */
@@ -59,6 +135,11 @@ static int ehci_pci_setup(struct usb_hcd *hcd)
 	u32 temp;
 	int retval;
 
+#if defined(__VMKLNX__)
+	if (usb_is_intel_switchable_ehci(pdev))
+		ehci_disable_xhci_companion();
+#endif
+
 	switch (pdev->vendor) {
 	case PCI_VENDOR_ID_TOSHIBA_2:
 		/* celleb's companion chip */
@@ -325,6 +406,11 @@ static int ehci_pci_resume(struct usb_hcd *hcd, bool hibernated)
 	unsigned long flags;
 #endif
 
+#if defined(__VMKLNX__)
+	if (usb_is_intel_switchable_ehci(pdev))
+		ehci_disable_xhci_companion();
+#endif
+
 	// maybe restore FLADJ
 
 	if (time_before(jiffies, ehci->next_statechange))

View file

@@ -795,11 +795,16 @@ void usb_stor_invoke_transport(struct scsi_cmnd *srb, struct us_data *us)
 	/* if the transport provided its own sense data, don't auto-sense */
 	if (result == USB_STOR_TRANSPORT_NO_SENSE) {
-#if defined(__VMKLNX__)
-		VMK_ASSERT(srb->sense_buffer[0]);
-#endif
 		srb->result = SAM_STAT_CHECK_CONDITION;
 		last_sector_hacks(us, srb);
+#if defined(__VMKLNX__)
+		if (unlikely(srb->sense_buffer[0] == 0)) {
+			_VMKLNX_USB_STOR_MSG("srb->sense_buffer[0] unexpectedly 0 for srb %p, fake Illegal Request sense data\n", srb);
+			memcpy(srb->sense_buffer, usb_stor_sense_invalidCDB, sizeof(usb_stor_sense_invalidCDB));
+		}
+#endif
 		return;
 	}
@@ -979,6 +984,15 @@ Retry_Sense:
 
 	/* set the result so the higher layers expect this data */
 	srb->result = SAM_STAT_CHECK_CONDITION;
 
+#if defined(__VMKLNX__)
+	if (unlikely(srb->sense_buffer[0] == 0)) {
+		_VMKLNX_USB_STOR_MSG("srb->sense_buffer[0] unexpectedly 0 for srb %p, fake Illegal Request sense data\n", srb);
+		memcpy(srb->sense_buffer,
+		       usb_stor_sense_invalidCDB,
+		       sizeof(usb_stor_sense_invalidCDB));
+	}
+#endif
+
 	/* We often get empty sense data. This could indicate that
 	 * everything worked or that there was an unspecified
 	 * problem. We have to decide which.

View file

@@ -1,5 +1,5 @@
 /*
- * Portions Copyright 2008 - 2010 VMware, Inc.
+ * Portions Copyright 2008 - 2011 VMware, Inc.
  */
 
 /*
  * INET		An implementation of the TCP/IP protocol suite for the LINUX
@@ -583,6 +583,7 @@ struct net_device
 	/* Net device features */
 	unsigned long features;
 #if defined (__VMKLNX__)
+#define NETIF_F_PSEUDO_REG    0x10000000000 /* PF uplink registered as pseudo. */
 #define NETIF_F_UPT           0x100000000   /* Uniform passthru */
 #define NETIF_F_HIDDEN_UPLINK 32768         /* Uplink hidden from VC. */
 #define NETIF_F_SW_LRO        16384         /* Software LRO engine. */
@@ -896,9 +897,14 @@ struct net_device
 	void *default_net_poll;
 	struct napi_wdt_priv napi_wdt_priv;
 	struct vlan_group *vlan_group;
-	void *unused;	/* this is not used anymore, if you
-			   need a pointer in this struct
-			   without breaking binary feel free! */
+
+	/* This is being used by pnics which register as pseudo-devices. These
+	 * drivers save the dev->pdev in this field, prior to setting it to
+	 * null. These device drivers also set the DEVICE_PSEUDO_REG bit in the
+	 * dev->features field if they use the earlier "unused" field in this
+	 * manner.
+	 */
+	void *pdev_pseudo;
 	void *pt_ops;
 	void *cna_ops;
 	unsigned long netq_state; /* rx netq state */

View file

@@ -1,5 +1,5 @@
 /* **********************************************************
- * Copyright 1998, 2010 VMware, Inc. All rights reserved.
+ * Copyright 1998, 2010, 2011 VMware, Inc. All rights reserved.
  * **********************************************************/
 
 /*

View file

@@ -6789,7 +6789,17 @@ GetNICDeviceProperties(void *clientData, vmk_UplinkDeviceInfo *devInfo)
    dev = (struct net_device *)clientData;
    pdev = dev->pdev;
 
-   if (pdev == NULL) {
+   if (dev->features & NETIF_F_PSEUDO_REG) {
+      // If physical device but registered as a pseudo-device,
+      // get the actual pdev from dev->pdev_pseudo (saved by the
+      // NIC driver).
+      VMK_ASSERT(pdev == NULL);
+      pdev = (struct pci_dev *)dev->pdev_pseudo;
+      VMKLNX_WARN("PCI device registered as pseudo-device %u:%u:%u.%u",
+                  pci_domain_nr(pdev->bus), pdev->bus->number,
+                  PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
+   }
+   else if (pdev == NULL) {
       /*
        * Pseudo NICs don't have PCI properties
        */
@@ -6824,6 +6834,12 @@ GetNICDeviceProperties(void *clientData, vmk_UplinkDeviceInfo *devInfo)
       goto out;
    }
 
+   // If it is a physical device being registered as a pseudo-device,
+   // return here prior to other setup.
+   if (dev->features & NETIF_F_PSEUDO_REG) {
+      return VMK_OK;
+   }
+
    /* Most constraints don't apply so set them to zero. */
    memset(&devInfo->constraints, 0, sizeof(devInfo->constraints));
    devInfo->constraints.addressMask = pdev->dma_mask;
@@ -7302,6 +7318,10 @@ LinNet_ConnectUplink(struct net_device *dev, struct pci_dev *pdev)
       connectInfo.flags = 0;
    }
 
+   if (dev->features & NETIF_F_PSEUDO_REG) {
+      connectInfo.flags |= VMK_UPLINK_FLAG_PSEUDO_REG;
+   }
+
    if (vmk_UplinkRegister((vmk_Uplink *)&dev->uplinkDev, &connectInfo) != VMK_OK) {
       goto fail;
    }

View file

@ -1,5 +1,5 @@
 /********************************************************************************
- * Copyright 2008, 2009, 2010 VMware, Inc. All rights reserved.
+ * Portions Copyright 2008 - 2011 VMware, Inc. All rights reserved.
  *******************************************************************************/
 
 /******************************************************************