dev_vdisk: ***ERROR***: FLUSH bio failed: -121 (cmd ffff8801efd16ac8)


Esos-User123

Apr 19, 2017, 9:54:17 AM
to esos-users
Hi,

we have a problem with our setup.
We are using ESOS as an FC target for just one server.

- vdisk_blockio with the default block size (512) and default settings
- 1 target / 2 initiators, configured as Windows MPIO
- HBA: QLE2462
- ESOS version: 1.0.0

dmesg:
[76076.208996] sd 0:0:0:0: [sda] tag#311 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[76076.208999] sd 0:0:0:0: [sda] tag#311 Sense Key : Illegal Request [current]
[76076.209009] sd 0:0:0:0: [sda] tag#311 Add. Sense: Invalid command operation code
[76076.209015] sd 0:0:0:0: [sda] tag#311 CDB: Synchronize Cache(10) 35 00 00 00 00 00 00 00 00 00
[76076.209017] blk_update_request: critical target error, dev sda, sector 0
[76076.209018] dev_vdisk: ***ERROR***: FLUSH bio failed: -121 (cmd ffff8801efd14a48)
[76076.209417] sd 0:0:0:0: [sda] tag#311 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[76076.209420] sd 0:0:0:0: [sda] tag#311 Sense Key : Illegal Request [current]
[76076.209422] sd 0:0:0:0: [sda] tag#311 Add. Sense: Invalid command operation code
[76076.209425] sd 0:0:0:0: [sda] tag#311 CDB: Synchronize Cache(10) 35 00 00 00 00 00 00 00 00 00
[76076.209426] blk_update_request: critical target error, dev sda, sector 0
[76076.209428] dev_vdisk: ***ERROR***: FLUSH bio failed: -121 (cmd ffff8801efd15a88)
[76076.212353] sd 0:0:0:0: [sda] tag#311 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[76076.212355] sd 0:0:0:0: [sda] tag#311 Sense Key : Illegal Request [current]
[76076.212358] sd 0:0:0:0: [sda] tag#311 Add. Sense: Invalid command operation code
[76076.212361] sd 0:0:0:0: [sda] tag#311 CDB: Synchronize Cache(10) 35 00 00 00 00 00 00 00 00 00
[76076.212362] blk_update_request: critical target error, dev sda, sector 0
[76076.212363] dev_vdisk: ***ERROR***: FLUSH bio failed: -121 (cmd ffff8801efd16108)
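For reference, the -121 in the FLUSH failure is a negated Linux errno, which decodes to EREMOTEIO ("Remote I/O error"). The sense data above shows why: the backing device answers SYNCHRONIZE CACHE(10) with Illegal Request / Invalid command operation code, so the kernel fails the flush bio. A quick way to decode such codes:

```python
import errno
import os

# The kernel reports bio failures as negated errno values; -121 -> 121.
code = 121
print(errno.errorcode[code])  # EREMOTEIO
print(os.strerror(code))      # Remote I/O error (on Linux)
```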


Sometimes the connection gets lost and the server is unable to reconnect to the FC target.
Do you have any ideas what is happening and how we can fix it?


Best Regards,
EU123



/etc/scst.conf:
__________________________________________________
# Automatically generated by SCST Configurator v3.2.0-pre1.   
                                                              
# Non-key attributes                                          
max_tasklet_cmd 10                                            
poll_us 0                                                     
setup_id 0x0                                                  
suspend 0                                                     
threads 4                                                     
                                                              
HANDLER vdisk_blockio {                                       
        DEVICE NAS5 {                                         
                filename /dev/disk/by-id/scsi-20b9ca35e00d00000
                                                 
                # Non-key attributes             
                block "0 0"                      
                blocksize 512                    
                cluster_mode 0                   
                expl_alua 0                      
                nv_cache 0                       
                pr_file_name /var/lib/scst/pr/NAS5
                prod_id NAS5                  
                prod_rev_lvl " 321"           
                read_only 0                   
                removable 0                   
                rotational 1                  
                size 7988639170560            
                size_mb 7618560               
                t10_dev_id 543d2380-NAS5      
                t10_vend_id SCST_BIO          
                thin_provisioned 0            
                threads_num 1                 
                threads_pool_type per_initiator
                tst 1                        
                usn 543d2380                 
                vend_specific_id 543d2380-NAS5
                write_through 0            
        }                                  
}                                          
                                           
TARGET_DRIVER copy_manager {               
        # Non-key attributes               
        allow_not_connected_copy 0         
                                           
        TARGET copy_manager_tgt {          
                # Non-key attributes            
                addr_method PERIPHERAL          
                black_hole 0                    
                cpu_mask f                      
                forwarding 0                    
                io_grouping_type auto              
                rel_tgt_id 0                       
                                                   
                LUN 1 NAS5 {                       
                        # Non-key attributes            
                        read_only 0                     
                }                                       
        }                                               
}                                                       
                                                        
TARGET_DRIVER iscsi {                                   
        enabled 0                                       
}                                                       
                                                        
TARGET_DRIVER qla2x00t {                                
        TARGET XX:XX:XX:XX:XX:XX:XX:XX  {                
               HW_TARGET                    
                                             
                enabled 1                    
                rel_tgt_id 1               
                                           
                # Non-key attributes       
                addr_method PERIPHERAL     
                black_hole 0               
                cpu_mask f                 
                explicit_confirmation 0    
                forwarding 0               
                io_grouping_type auto      
                node_name 20:00:00:24:ff:0c:7e:f9
                port_name 21:00:00:24:ff:0c:7e:f9
                                                
                GROUP nas {                     
                        LUN 0 NAS5 {            
                                # Non-key attributes
                                read_only 0        
                        }                          
                                                   
                        INITIATOR XX:XX:XX:XX:XX:XX:XX:XX 
                                                        
                        INITIATOR XX:XX:XX:XX:XX:XX:XX:XX 
                                                        
                        # Non-key attributes            
                        addr_method PERIPHERAL          
                        black_hole 0                    
                        cpu_mask f                      
                        io_grouping_type auto           
                }                                       
        }                                               
}                                                       

                       

Marc Smith

Apr 19, 2017, 10:42:26 AM
to esos-...@googlegroups.com
Looks like something is wrong with your '/dev/sda' block device... I
assume that is what /dev/disk/by-id/scsi-20b9ca35e00d00000 resolves
to? What is that block device? A SCSI disk from a hardware RAID
controller? Can you access it inside ESOS (e.g., fdisk -l /dev/sda)?

--Marc

Esos-User123

Apr 20, 2017, 8:13:14 AM
to esos-users

Hey,

yes, we are using a RAID controller (ICP 5085BL) with 8 HDDs (2 TB each), configured as RAID 10.

fdisk -l /dev/sda says "fdisk: device has more than 2^32 sectors, can't use all of them", because we are using a GPT partition table.
But the device is visible, and we are able to write more than 2 TB to the storage (up to the point where the OS crashes).
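That fdisk warning is just the old 2^32-sector addressing limit; with 512-byte sectors it works out to 2 TiB, which the exported volume (the size value from the scst.conf above) exceeds:

```python
# Old fdisk addresses at most 2**32 sectors; at 512 bytes each that is 2 TiB.
SECTOR_SIZE = 512
fdisk_limit = 2**32 * SECTOR_SIZE
print(fdisk_limit)  # 2199023255552 bytes = 2 TiB

# 'size' attribute of the NAS5 device in scst.conf
volume_size = 7988639170560
print(volume_size > fdisk_limit)  # True -> fdisk prints the warning
```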

Marc Smith

Apr 20, 2017, 9:04:55 AM
to esos-...@googlegroups.com
On Thu, Apr 20, 2017 at 8:13 AM, Esos-User123 <bianco...@gmail.com> wrote:
That warning from fdisk is fine. What I was looking for is whether you can read data from the /dev/sda device inside of ESOS.
Double-check that your RAID controller is functioning properly and that the logical RAID 10 device is accessible.

Also check for any other interesting kernel messages.
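These checks can be sketched as a short script. The device path and the exact grep patterns are assumptions to adjust to your setup, and sg_sync comes from the sg3_utils package, which may not be installed on every ESOS image:

```shell
#!/bin/sh
# Hypothetical diagnostic sketch -- adjust DEV to whatever
# /dev/disk/by-id/scsi-20b9ca35e00d00000 resolves to.
DEV=${DEV:-/dev/sda}

# 1. Confirm the device is readable at all.
[ -b "$DEV" ] && dd if="$DEV" of=/dev/null bs=1M count=16

# 2. Reproduce the failing command path: sg_sync (from sg3_utils)
#    issues SYNCHRONIZE CACHE(10) directly. An "Invalid opcode"
#    response here would mean the RAID firmware itself rejects flushes.
command -v sg_sync >/dev/null 2>&1 && sg_sync -v "$DEV"

# 3. Scan recent kernel messages for related errors.
dmesg 2>/dev/null | grep -iE 'sda|flush|qla2' | tail -n 20
```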

--Marc