
Saturday, May 13, 2017

Installing MATE desktop environment on Fedora 25


(1) sudo dnf install @mate-desktop-environment

With Cinnamon already installed alongside the default GNOME 3, the transaction fails at:

The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'dnf clean packages'.
Error: Transaction check error:
  file /usr/share/man/man1/vim.1.gz from install of vim-common-2:8.0.586-1.fc25.x86_64 conflicts with file from package vim-minimal-2:7.4.1989-2.fc25.x86_64

Error Summary
-------------


This appears to be a long-standing bug, not related to the MATE desktop environment install but rather to the conflict between vim-minimal and the vim-common update, which can be seen here:  https://bugzilla.redhat.com/show_bug.cgi?id=1329015

Solution: run

[root@system76-f25 ~]# dnf update -v --debugsolver vim-minimal -y

This updated vim, after which MATE installed successfully.

Lesson: always update before installing new software.
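
Putting that lesson into practice, the order that would have avoided the conflict is simply:

sudo dnf update -y
sudo dnf install @mate-desktop-environment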

Friday, May 12, 2017

Arc Touch Bluetooth Mouse in Fedora 25

Had some issues using the default BlueZ Bluetooth GUI within Fedora 25 to connect to the Arc Touch Bluetooth mouse.  The Bluetooth GUI would show "Not Setup" and no amount of clicking would allow me to perform any actions on the mouse.

Then I found bluetoothctl, which, when run from the command line, lets you work with the devices the Bluetooth adapter can see and pair with and connect to them.

$ sudo bluetoothctl
[bluetooth]# scan on
[bluetooth]# devices
[bluetooth]# trust [MAC of mouse]
[bluetooth]# pair [MAC of mouse]

Then the prompt will change to [Arc Touch BT Mouse] and you can run info to show the current status.

[Arc Touch BT Mouse]# info
Device D2:23:01:9B:F3:65
    Name: Arc Touch BT Mouse
    Alias: Arc Touch BT Mouse
    Appearance: 0x03c2
    Icon: input-mouse
    Paired: yes
    Trusted: yes
    Blocked: no
    Connected: yes
    LegacyPairing: no
    UUID: Generic Access Profile    (00001800-0000-1000-8000-00805f9b34fb)
    UUID: Generic Attribute Profile (00001801-0000-1000-8000-00805f9b34fb)
    UUID: Device Information        (0000180a-0000-1000-8000-00805f9b34fb)
    UUID: Battery Service           (0000180f-0000-1000-8000-00805f9b34fb)
    UUID: Human Interface Device    (00001812-0000-1000-8000-00805f9b34fb)
    Modalias: usb:v045Ep0804d0001


There does appear to be some sort of idle timeout after which the connection drops, but activating the mouse by moving it reconnects it.  I need to investigate whether there is a udev rule or BlueZ setting to prevent the timeout from occurring.
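
One place I plan to check (an assumption on my part, not something I have verified on Fedora 25) is the BlueZ input plugin configuration file, which has an idle-disconnect setting for input devices:

# /etc/bluetooth/input.conf  (may need to be created)
[General]
# IdleTimeout is in minutes; 0 is supposed to mean no idle disconnect
IdleTimeout=0

Restarting the Bluetooth service afterwards (systemctl restart bluetooth) would be needed.  If the disconnect is the mouse's own power saving rather than BlueZ, this setting would not change anything.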


Saturday, August 20, 2011

DS4243 Disk Shelves


  • DS4243 FRUs, connectors, controllers & switches
  • Configuration and tasks related to DS4243
  • Field Replaceable Units (FRUs) related to the DS4243
    • The DS4243 is a FRU in and of itself, comprised of multiple other FRUs:
      • Chassis
        • 4U weighing 110lbs
      • PSUs
        • two required for SATA drive shelves
          • two empty slots must contain blank PSUs to provide air flow
        • four required for SAS drive shelves
        • Labeled 1-4 from left to right, top to bottom
      • IOM3
        • input-output modules
        • provide multi-path high availability
        • each IOM3 contains two ACP ports and two SAS ports
      • Ear Covers
        • left and right ear covers; the left covers the digital readout of the shelf ID as well as the shelf LEDs, while the shelf IDs are under the right ear cover
        • hidden by the left ear cover is the shelf ID switch; the shelf must be power-cycled for a new shelf ID to take effect
      • HDDs
        • supports 24 disk drives, labeled 0-23 from top left to bottom right
        • SAS or SATA but not mixed within a shelf; however, different shelves in the same stack can be mixed; use blank panels in empty slots to ensure proper air flow
      • SAS HBAs (controller)
        • quad-port PCIe
          • in 3000+ series FAS/V models
        • dual-port PCIe mini
          • only supported in the FAS2050 controller
      • QSFP Copper Cables
        • used to connect SAS HBAs on the controller to the IOM3 SAS ports on disk shelves, or to connect one shelf's IOM3 to another's
      • Ethernet (ACP) Cables
        • extra shielding for increased EMI performance
        • from IOM3 to IOM3 or IOM3 to controller
      • Cable connectors
        • QSFP & 10/100
          • Quad-small-form factor pluggable connectors are keyed, and should be gently inserted into IOM3 SAS ports.
          • the 10/100 ports run at 10 Mbps/100 Mbps and provide the ACP Ethernet network
        • two 10/100 and two QSFP on each IOM3
          • QSFP is 4-wide, meaning 4 lanes per port at 3 Gbps each for a max of 12 Gbps
      • LEDs
        • 51 per chassis
        • 3 shelf LEDs and 48 disk drive LEDs
          • Disk LEDs:  Power (top) Fault (bottom)
          • shelf LEDs: Power (top), Fault (middle), Activity (bottom)
        • PSU LEDs
          • each PSU has 4 LEDs:
            • AC Fail, PCM OK, Fan Fail, DC Fail
        • IOM3 LEDs
          • each IOM3 has 7 LEDs: IOM fault, Ethernet activity & Ethernet up/down, and SAS activity

Friday, August 19, 2011

ASUP options for NCEP baseline filer


Autosupport (ASUP) settings for AWIPS NetApp FAS3160A:

CCC = siteID (e.g. NHCN)
RR = AWIPS DNS zone (e.g. sr)
XX = Site LAN subnet

AutoSupport option                            AWIPS Setting
autosupport.cifs.verbose                      off
autosupport.content                           minimal
autosupport.doit                              DONT
autosupport.enable                            on
autosupport.from*                             nas[1|2]-CCC@RR.awips.noaa.gov
autosupport.local.nht_data.enable             on
autosupport.local.performance_data.enable     off
autosupport.mailhost**                        165.92.XX.[5|6]
autosupport.minimal.subject.id                hostname
autosupport.nht_data.enable                   on
autosupport.noteto                            BLANK
autosupport.partner.to                        BLANK
autosupport.performance_data.enable           off
autosupport.retry.count                       15
autosupport.retry.interval                    4m
autosupport.support.enable                    off
autosupport.support.proxy                     BLANK
autosupport.support.to                        autosupport@netapp.com
autosupport.support.transport                 smtp
autosupport.support.url                       support.netapp.com/asupprod/post/1.0/postAsup
autosupport.throttle                          on
autosupport.to***                             netapp@dx[5|6].RR.awips.noaa.gov

*autosupport.from is the FQDN of the controller
**nas1 smtp mailhost is dx5; nas2 smtp mailhost is dx6
***autosupport.to is user netapp at FQDN of mailhost
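
On each filer these values are set with the options command, one option at a time.  A sketch using the placeholder values from the table (substitute the real CCC/RR/XX values for your site):

> options autosupport.enable on
> options autosupport.content minimal
> options autosupport.mailhost 165.92.XX.5
> options autosupport.from nas1-CCC@RR.awips.noaa.gov
> options autosupport.to netapp@dx5.RR.awips.noaa.gov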

Replacing a failed disk on a system without spares with a non-zeroed disk from another system

Sometimes we have to add disks from another system, or non-zeroed spares, to replace a failed disk in a system that either has no spares or has more than one failed disk in a single aggregate.


This is fairly simple.  First, your replacement disk needs to be the same speed as the ones contained in the aggregate; DataONTAP doesn't allow mixed speeds by default, for performance reasons.  You can change the setting to allow mixed-speed drives in the same aggregate by changing raid.rpm.fcal.enable, but this isn't recommended.


First replace the failed disk with the replacement disk.


If the replacement disk is already a member of a foreign volume or aggregate, it will show up with "(1)" appended.  For example, if your replacement disk is from another system's vol0, it will show up on the new system as vol0(1) and will be brought in offline and marked as foreign.  This is helpful for the next step.


Locate the new disk in the system.  If it is a member of a foreign volume or aggregate, list it with vol status or aggr status and note the volume or aggregate name it belongs to, e.g. vol0(1).


rsyncnas-ancf> vol status

         Volume State      Status            Options
       vol0(1) offline     raid_dp, flex     fs_size_fixed=on
            foreign        degraded
           vol0 online     raid_dp, flex     root
            irt online     raid_dp, flex     nosnapdir=on, fs_size_fixed=on
           ndfd online     raid_dp, flex     nosnapdir=on, fs_size_fixed=on


First you need to destroy the volume the new disk was previously a member of:


> vol destroy vol0(1)

BE VERY CAREFUL HERE ... double-check that you enter the foreign volume and not the system's root volume.


Once the volume is destroyed, the disk becomes a spare; you can list the spares on the system via vol status -s or aggr status -s.  At this point you will need to zero the spare disk with:


> disk zero spares


Once the disk zeroes, it is ready to be re-added to the system, assuming you previously did not have a spare.  If you lost a dparity disk, the RAID group will have been degraded from raid_dp to raid4 automatically by DataONTAP.  You can change the RAID type on the aggregate from which the failed disk came via:


> aggr options aggr1 raidtype raid_dp


DataONTAP will automatically begin rebuilding/reconstructing the RAID with the spare you just added.
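
To watch the progress, something like the following should work (aggr1 being the example aggregate from above):

> aggr status -r aggr1

which lists the RAID groups in the aggregate and shows the percent-complete on any disk that is reconstructing.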


If a disk failed while you were failing resources over from one controller to the other, and you failed back before the disk was replaced, sometimes DataONTAP will assign the new spare to the controller that originally owned the failed disk, but the reconstruction will have occurred on the partner node while the cluster was in takeover mode.  Therefore the partner node will have one fewer spare, but when you insert the new disk it will be assigned to the node that owned the original failed disk, so that node will have one spare too many.


To reassign a spare disk to a partner controller:


> disk assign 1a.10.11 -o nas2-nhcn -f


Assuming 1a.10.11 is a spare and the partner nodename is nas2-nhcn.  You can use the partner systemID with the -s option.

Monday, August 8, 2011

SAN Implementation Storage System Configuration

FC and IP SAN Storage System Configuration Basics

FC SAN:

[1] Verify SAN topology:

The fcp config command is used to ensure that the ports are configured according to the requirements decided on during the SAN design phase.  If onboard FC ports are being used, the FC port behavior may have to be set: if the FC ports connect to disk shelves, the port type needs to be set to FC initiator, while if they connect to the SAN fabric or the host, the behavior needs to be set to FC target.

[2] Enable FCP protocol:

First run the license command to see the available licenses; use license add to add any that are missing.  To enable FCP, use the fcp start command.  Finally, verify the status with the fcp status command.
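
A rough sketch of that sequence (the license code shown is a placeholder, not a real key):

> license add XXXXXXXXXXXXXX
> fcp start
> fcp status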

[3] Port behavior set:

The adapter must be taken offline: use the fcp config command followed by the adapter name and the down option.  To set the behavior of the FC ports, use the fcadmin config -t [initiator|target] command.  In order for the changes to take effect, the system must be rebooted.
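
For example, to flip an onboard port to target behavior (0c is a hypothetical adapter name; check fcp config for your own):

> fcp config 0c down
> fcadmin config -t target 0c
> reboot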


[4] cfmode checked and set:

To check which cfmode is being used, use the fcp show cfmode command.  There are several things to consider: [1] storage system, [2] host operating system, [3] DataONTAP version, [4] topology features.  There are four cfmodes: [1] single_image; [2] partner; [3] dual_fabric; [4] standby.  Single_image is the default and is recommended for DataONTAP 7.2+ systems.

To set the cfmode, the advanced privilege level needs to be accessed: priv set advanced
Then FCP must be stopped: fcp stop
Next, use the fcp set cfmode command to change modes
Then restart FCP with fcp start
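
Put together, the sequence looks something like this (single_image used as the example mode):

> priv set advanced
> fcp stop
> fcp set cfmode single_image
> fcp start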


[5] WWPN:

To record the FC WWPNs, use the fcp show adapters and fcp show nodename commands.


[6] Check port configuration:

Verify that the FC ports are online and that the speed and media type are correct; use the DataONTAP fcp config command to verify these values.  By default they are set to AUTO for autonegotiation.  You can manually set them if the FC switch port or host cannot autonegotiate (rare).


[7] Create aggr/vol:

Create the appropriate aggregates and volumes, ensuring the volumes are large enough for all LUNs and, if used, snapshots.  Also, the Unicode option must be enabled for volumes containing LUNs: vol options <vol_name> create_ucode on
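
A minimal sketch, with aggr1, a 24-disk aggregate, and a 500g volume named lunvol as made-up values:

> aggr create aggr1 24
> vol create lunvol aggr1 500g
> vol options lunvol create_ucode on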




iSCSI SAN:


[1] Check Ethernet interfaces:

To bring up the interfaces, use ifconfig -a to view the available interfaces, then run ifconfig <interface> up|down to bring them up or down.  Ensure proper configuration of the IP address and netmask, and ensure the speed is set to autonegotiate or to the same speed as the network.  If these changes need to persist across reboots, the /etc/rc and /etc/hosts files need to be updated.
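
For example (e0b, the address, and the netmask are placeholders for your own iSCSI network values):

> ifconfig e0b 192.168.50.10 netmask 255.255.255.0 up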

[2] License and enable iSCSI:

Run the license command to see the available package licenses.  Run the iscsi start and iscsi status commands to start iSCSI on the filer and verify that it started.

[3] Configure iSCSI Ethernet interfaces:

Now check that iSCSI traffic has been enabled on the Ethernet interfaces.  These interfaces can be up but have iSCSI traffic disabled.  It is recommended that iSCSI traffic be separated from general TCP/IP traffic on the Ethernet connections, so it is encouraged to disable iSCSI traffic on e0m if it is being used for the RLM, and on any other ports used for general TCP/IP traffic, and to enable it only on the Ethernet interfaces dedicated to iSCSI traffic.  Having host-side iSCSI initiators and storage-side iSCSI targets connected to a separate network is best practice.

Run the iscsi interface enable command and specify the interface(s) that iSCSI traffic will use.  Run the iscsi interface show command to see whether iSCSI traffic has been enabled on the correct interface(s).
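
A sketch, with e0b as a hypothetical interface dedicated to iSCSI:

> iscsi interface disable e0m
> iscsi interface enable e0b
> iscsi interface show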

[4] Verify target portal groups:

Verify that the interfaces are assigned to a valid iSCSI Target Portal Group (TPG).  By default DataONTAP assigns each iSCSI interface to its own default TPG to allow multiple iSCSI paths to each LUN.  To view the available iSCSI TPGs and which iSCSI target interfaces are assigned to each TPG, use iscsi tpgroup show.  Then, to create a target portal group, run the iscsi tpgroup create command.
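
A sketch (tpg_iscsi is a made-up group name and e0b a made-up interface; exact syntax may vary by DataONTAP release):

> iscsi tpgroup show
> iscsi tpgroup create tpg_iscsi e0b
> iscsi tpgroup show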

[5] Create aggr/vols:


Finally, create aggregates and volumes just as in the FC SAN steps above.

NetApp -- Flash Cache

FLASH CACHE BASICS

A PCIe expansion card used as a scalable read cache in NetApp storage systems.
Enables a disk-limited storage system to achieve its maximum I/O potential, using fewer disks and therefore fewer resources (power, rack space, money).
Helps achieve lower read latency due to the faster access times of solid-state memory.
Flash Cache hits reduce read latency by a factor of 10.

Specifications:
Standard-height, 3/4-length x8 PCIe card
72 NAND flash chips, 36 on each side of the card; the chip density differs between the two card sizes: the 256-GB version has 72 32-Gb flash chips while the 512-GB version has 72 64-Gb flash chips.
Each card consumes a single PCIe Gen1 connection
256-GB & 512-GB cards are supported by DataONTAP 7.3.2+ (NOTE: not supported in 8.0, only 8.0.1+)

Under the aluminum heat sink in the center of the card is a custom-designed controller; the PCI bracket is at the end of the Flash Cache card.

Indicators:
Two LEDs are located on the card -- you can see the LEDs from the back of the controller through the perforated PCIe cover.

The amber LED should be off under normal operation; if it is on, there is a problem with the card, and the card is taken off-line when a fault is detected.
The green LED indicates activity and provides a heartbeat indication, blinking at 0.5 Hz; the blink rate is based on the I/O rate of the card as follows:  0.5 Hz (< 1,000 I/O per second); 1.4 Hz (1,000-10,000); 5.0 Hz (10,000-100,000); 10.0 Hz (> 100,000).

Power:

Draws all required power from the 12V rail on the PCI connector
18W power consumption, which is below the 25W maximum consumption required of all PCIe-supported platforms
10C-40C ambient operating temperature, lower than most PCIe components
Improved air flow allows the memory components to operate at lower ambient temperatures
Uses 95% less electricity than a shelf of 14 10,000-RPM disks (which is what the card allows to be eliminated)

Flash Cache Field Programmable Gate Array (FPGA)

One x8 PCIe 1.1 Interface
DMA access engine
Four independent 72-bit async NAND interfaces, each with 18 flash devices
The flash data interfaces that connect to the flash devices can run at 40MHz, so the raw bandwidth of the card is 1.28GB/s; when one interface is busy, another can take up the reads
An interface can operate on 9 flash devices in parallel at a time
Each of the 18 flash devices on an interface contains multiple 8GB NAND cores
256GB has 288 cores, 4 cores/device
512GB has 576 cores, 8 cores/device


FPGA Low Level Specifics:
Each NAND core is made up of blocks, and the cache wears out in increments of blocks
Each block contains pages; pages are the units of storage that data is written into and read from
Across 9 parallel cores, 8 cores are for data and one is for parity (this is a bank)
8 banks for 256GB
If a NAND core loses too many blocks, it can be taken out of use without functional disruption
A DMA engine supports one write and erase queue per flash interface
Multithreaded DMA engines ensure non-volatile operation and support 8 read queues for each interface
The DMA engine supports 520-byte sectors; the flash controller handles the write operations to flash memory and reports any issues
If a WAFL block read from Flash Cache fails because of an uncorrectable BCH error in flash memory, the data is fetched from disk
Flash memory contents are protected by 4-bit BCH codes
If a core fails, the card continues to operate without any loss of capacity, using parity to reconstruct data
If an entire bank of cores fails, the card continues to work with a reduced capacity

The dynamic interrupt mechanism speeds up or slows down to meet the host processing rate; the card is upgradable from a backup image and can match the power or thermal limits of the platform.

Flash Cache FPGA Enhanced Resiliency
Wear Leveling: uses algorithms to ensure each block receives an equal amount of wear
Bad Block Detection and Remapping: the FPGA monitors and identifies worn-out blocks; failed blocks are replaced by the FPGA
BCH Error Correction Engine: soft errors during reads are handled here
Protection RAID: 8 data and 1 parity chips; can tolerate the loss of an entire chip
Dynamic Flash Remapping and Reduction: if two chips from the same bank are lost, software can map out that region of memory; when a significant portion is lost, an ASUP message is generated
WAFL Checksums: additionally, software stores a checksum with every WAFL block; if it fails on a read, the data from Flash Cache is discarded and the data is then obtained from disk

Flash Cache Subsystem:

WAFL:  helps reduce the demand for random disk reads by reading user data and metadata from the external cache, it interfaces with the WAFL filesystem, and then controls and tracks the cache state.

WAFL External Cache (EC):  is a software module that is used to cache WAFL data in external memory cards.  The EC can be used with either PAM1 or Flash Cache (7.3.2+).  It also supports Predictive Cache Statistics (PCS) and contains three flow-control processes: primary cache eviction, cache lookup, and I/O completion.  Per 0.5TB of Flash Cache card, 1.5 to 2.0 GB of the storage system's main memory is preallocated for tag storage.

Flash Adaptation Layer (FAL):  is responsible for mapping a simple block address space on to one or more Flash Cache cards.  The FAL can manage cache writes in a way that produces excellent wear leveling, load balancing, and throughput while minimizing read variance that is caused by resource conflicts.  The FAL transparently implements bad block mapping, which gradually reduces flash capacity as flash blocks wear out.  Models flash memory as a single circular log across all blocks on cache.  Blocks must be erased before overwritten.  The number of erasures are limited, therefore wear leveling is important.  Round-robin scheduling of writes.  Reads and writes passed on to the Flash Cache driver.  Achieves wear leveling by placing EC writes in a circular log within a bank.

Flash Cache Driver (FCD):  manages all comms with the Flash Cache hardware, including request queues, interrupts, fault handling, initialization and FPGA.  Manages all Flash Cache cards, multiple cards are aggregated behind this FCD interface.  Provides memory unification, load balancing, and queuing across all cards.  Communicates through EMS by issuing messages for hardware status and error messages.  Automatically enabled when hardware detected.

Flash Cache Hardware:  The card itself.


Bad Blocks:
Two copies of the bad-block discovery table are kept (not stored in each flash block) to ensure only one bad-block table is erased at a time; on power-up the driver goes through discovery, and since the table is kept, initial power-up time is reduced.

Flash Management Module
operates at a higher level, viewing the components of a Flash Cache card as domains.
These domains are interfaces, flash banks, lanes, blocks, and cores.
The FMM assists in maintaining availability and providing serviceability, and it monitors these aspects from the time the storage system boots.  The FMM begins running when the storage system boots up and immediately begins discovering flash devices; it is enabled by default in the Data ONTAP operating system.  When a flash device, such as Flash Cache, is discovered, the driver of the flash device registers it with the FMM for reliability, availability, and serviceability (RAS).

TROUBLESHOOTING, INSTALLING, DIAGNOSTICS

Shut down the controller
Open the storage system
Remove an existing module if necessary
Install the Flash Cache card
Close and boot the system
Run diagnostics on the new Flash Cache card (for a first-time install)
(Also enable the WAFL EC software and configuration options for a first-time install)
Complete the installation process

Enable the WAFL external cache software license:  license add
Enable the WAFL external cache software:  options flexscale.enable on
If active-active (HA pair), perform on both systems

Run sysconfig -v to show slots in which cache is installed.  Three states, "Enabled|Disabled|Failed".  Further details of the failed state may also be listed if the state is failed, e.g. "Failed Firmware".

WAFL EC Config Options:

Cache normal user data blocks
Cache low-priority user data blocks
Cache only system metadata
To integrate the FlexShare QoS tool's buffer cache policies with WAFL external cache options use the priority command.

Default Flash Cache configuration:
options flexscale
flexscale.enable on
flexscale.lopri_blocks off << recommended to turn this on
flexscale.normal_data_blocks on


Note: when caching normal data, until the Flash Cache card is 70% full, all of the caching options are turned on; after the card is 70% full, the configured options are identified and used.  In the default Flash Cache caching mode, a block is cached when it is evicted from the main memory cache.  When the data is accessed at a later time, it is obtained from the Flash Cache card, which is larger than the main memory.  In this mode, Flash Cache acts like the main memory.

Flash Cache caches the file data and metadata.  Metadata is not displayed as an option for the options command because metadata is cached instantly.  Metadata is the data that is used to maintain the file-level data structure and directory structure for NFS and CIFS data.  In the default mode, Flash Cache also caches normal data, which primarily consists of the random reads.

The recommended configuration for Flash Cache is to turn on caching for normal data blocks and for low-priority data, which includes random reads and some of the writes.
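
Applying that recommendation is just a matter of flipping the two options shown above:

options flexscale.normal_data_blocks on
options flexscale.lopri_blocks on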

Predictive Cache Statistics:

Uses a sampling approach instead of an entirely new EC tag store, which is what PAM1 uses.
It samples, and only allocates and updates the sampled portion of the tag store, thus reducing CPU and memory usage.  Sampling rates are 1% and 2%; the default is 1%.

options flexscale
flexscale.enable pcs
flexscale.pcs_high_res off <<< turn on to use 2%
flexscale.pcs_size 1024GB <<< can change to test if more cache would help

LED Notifications:
Fault LED amber off // Green LED blinking -- NORMAL
Fault LED amber on // Green LED blinking -- hardware OK, but software taken off-line
Fault LED amber on // Green LED solid -- unknown problem, hardware problem, may need to be replaced
Fault LED amber off // Green LED off -- hardware problem with power supply -- replace
Fault LED amber on // Green LED off -- hardware problem with FPGA config -- replace

Use sysconfig -v, the EMS logs, and the stats show ext_cache_obj command to display the flash cards that the Flash Cache is using and the blocks that can be stored for each Flash Cache card.  Each card can store 135 million 4k blocks.

The FMM generates ASUP email notification messages.  ASUP needs to be enabled on the storage system; /etc/log/fmm_data is where the information and settings are stored.  Case types are DEGRADED, OFFLINED, and FAILED.