User / maintenance manual for the D-Link DSN-6420
D-Link iSCSI IP SAN storage. 10GbE iSCSI to SATA II / SAS RAID IP SAN storage, DSN-6410 & DSN-6420, User Manual, Version 1.0.
Preface. Copyright © 2011, D-Link Corporation. All rights reserved. No part of this manual may be reproduced or transmitted without written permission from D-Link Corporation. Trademarks: all products and trade names used in this manual are trademarks or registered trademarks of their respective companies.
Table of Contents
Chapter 1 Overview: 1.1 Features; 1.1.1 Highlights
Chapter 4: 4.4.1 Physical disk; 4.4.2 RAID group
Chapter 6: 6.2 Event notifications. Appendix: A. Certification list; B. Microsoft iSCSI initiator
Chapter 1 Overview
1.1 Features
D-LINK DSN-6000 series IP SAN storage provides non-stop service with a high degree of fault tolerance by using D-LINK RAID technology and advanced array management features. The DSN-6410/6420 IP SAN storage connects to the host system through an iSCSI interface.
D-LINK DSN-6410/6420 feature highlights. Host interface: 4 x 10GbE iSCSI ports (DSN-6420); 2 x 10GbE iSCSI ports (DSN-6410). Drive interface: 12 x SAS or SATA II. RAID controllers: dual-active RAID controllers.
RAID is the abbreviation of "Redundant Array of Independent Disks". The basic idea of RAID is to combine multiple drives together to form one large logical drive. This RAID drive obtains better performance, capacity and reliability than a single drive.
in cache, and the actual write to non-volatile media occurs at a later time. It speeds up system write performance, but bears the risk that data may be inconsistent between the cache and the physical disks for a short time interval.
RO: set the volume to be Read-Only.
MTU: Maximum Transmission Unit.
CHAP: Challenge Handshake Authentication Protocol. An optional security mechanism to control access to an iSCSI storage system over the iSCSI data ports.
iSNS: Internet Storage Name Service.
Part 3: Dual controller. SBB: Storage Bridge Bay.
four hard drives.
RAID 10: striping over the member RAID 1 volumes. RAID 10 needs at least four hard drives.
RAID 30: striping over the member RAID 3 volumes. RAID 30 needs at least six hard drives.
RAID 50: striping over the member RAID 5 volumes.
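As a rough illustration of how these RAID levels trade capacity for redundancy, the sketch below estimates usable capacity from the drive count and per-drive size. The formulas are the standard textbook ones for each level, not figures taken from this manual; actual firmware typically reserves some extra space.

```python
def usable_capacity(level: int, drives: int, size_gb: int) -> int:
    """Estimate usable capacity in GB for common RAID levels."""
    if level == 0:                      # striping, no redundancy
        return drives * size_gb
    if level == 1:                      # mirroring over 2 disks
        return size_gb
    if level in (3, 5):                 # one drive's worth of parity
        return (drives - 1) * size_gb
    if level == 6:                      # two drives' worth of parity
        return (drives - 2) * size_gb
    if level == 10:                     # striped mirrors
        return drives // 2 * size_gb
    raise ValueError("unsupported level")

# 12 x 1000 GB drives, as in a fully populated DSN-6410/6420 enclosure:
print(usable_capacity(5, 12, 1000))   # RAID 5  -> 11000
print(usable_capacity(6, 12, 1000))   # RAID 6  -> 10000
print(usable_capacity(10, 12, 1000))  # RAID 10 -> 6000
```

The nested levels (30, 50) follow by applying the inner level per sub-group and striping the results.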
1.3 iSCSI concepts
iSCSI (Internet SCSI) is a protocol which encapsulates SCSI (Small Computer System Interface) commands and data in TCP/IP packets for linking storage devices with servers over common IP infrastructures. iSCSI provides high-performance SANs over standard IP networks such as a LAN, a WAN or the Internet.
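Because iSCSI runs over plain TCP (well-known port 3260), basic reachability of a target's data port can be checked from any host before an initiator is configured. A minimal sketch; the address in the comment is a placeholder, not one from this manual:

```python
import socket

def iscsi_port_open(host: str, port: int = 3260, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the iSCSI data port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example with a placeholder data-port address:
# iscsi_port_open("192.168.1.100")  -> True only if a target is listening
```

This checks only TCP reachability; a successful iSCSI login additionally requires discovery and, optionally, CHAP authentication.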
Hardware iSCSI HBAs provide their own initiator tools; please refer to the HBA vendor's user manual. Microsoft, Linux, Solaris and Mac provide iSCSI initiator drivers. Please contact D-LINK for the latest certification list. Below are the available links: 1.
The management port can be transferred smoothly to the other controller with the same IP address. 6. Online firmware upgrade, no system downtime (only for DSN-6420). 7. Multiple target iSCSI nodes per controller supported; each LUN can be attached to one of 32 nodes from each controller. 8.
5. Instant volume configuration restoration. 6. Smart faulty sector relocation. 7. Hot-pluggable battery backup module support. Enclosure monitoring: 1. S.E.S. in-band management. 2. UPS management via dedicated serial port. 3. Fan speed monitors. 4.
Windows, Linux, Solaris, Mac. Drive support: 1. SAS. 2. SATA II (optional). 3. SCSI-3 compliant. 4. Multiple IO transaction processing. 5.
(EN55022 / EN55024) UL statement. FCC statement: This device has been shown to be in compliance with and was tested in accordance with the measurement procedures specified in the Standards and Spec.
The ITE is not intended to be installed and used in a home, school or public area accessible to the general population, and the thumbscrews should be tightened with a tool after both initial installation and subsequent access to the panel.
Chapter 2 Installation
2.1 Package contents
The package contains the following items: 1. DSN-6410/6420 IP SAN storage (x1). 2. HDD trays (x12). 3. Power cords (x4). 4. RS-232 cables (x2), one for console, the other for UPS. 5. CD (x1). 6. Rail kit (x1 set). 7.
The drives can be installed into any slot in the enclosure. Slot numbering will be reflected in the web UI. Tips: It is advisable to install at least one drive in slots 1 ~ 4. System event logs are saved to drives in these slots; if no drives are fitted, the event logs will be lost in the event of a system reboot.
2.3.3 Install drives. Note: Skip this section if you purchased a solution populated with drives. To install SAS or SATA drives with no Bridge Board, use the front mounting holes. To install SATA .
Figure 2.3.3.3 HDD tray description: HDD power LED: Green = HDD is inserted and good; Off = no HDD. HDD access LED: Blue blinking = HDD is being accessed; Off = no HDD. HDD tray handle. Latch for tray removal.
Controller 2 (only on DSN-6420). Controller 1. Power supply unit (PSU1). Fan module (FAN1 / FAN2). Power supply unit (PSU2). Fan module (FAN3 / FAN4).
Figure 2.3.4.3 (DSN-6410 SFP+). Connector, LED and button description: 10GbE ports (x2). Link LED: Orange = asserted when a 1G link is established and maintained; Blue = asserted when a 10G link is established and maintained.
BBM status button: when the system power is off, press the BBM status button. If the BBM LED is green, the BBM still has power to keep data in the cache. If not, the BBM power has run out and it can no longer keep the data in the cache.
2.5 Deployment
Please refer to the following topology and have all the connections ready. Figure 2.5.1 (DSN-6420). Figure 2.5.2 (DSN-6410). 1. Set up the hardware connections before powering on the servers. Connect the console cable, management port cable, and iSCSI data port cables in advance.
2. In addition, installing an iSNS server is recommended for dual-controller systems. 3. Power on the DSN-6420/6410 and DSN-6020 (optional) first, and then power on the hosts and the iSNS server. 4. It is suggested that the host server log on to the target twice (both controller 1 and controller 2); MPIO should then be set up automatically.
Figure 2.5.4. 1. Use the RS-232 console cable (black, phone jack to DB9 female) to connect the controller to the management PC directly. 2. Use the RS-232 UPS cable (gray, phone jack to DB9 male) to connect the controller to the APC Smart-UPS serial cable (DB9 female side), and then connect the serial cable to the APC Smart-UPS.
Chapter 3 Quick setup
3.1 Management interfaces
There are three management methods for the D-LINK IP SAN storage, described in the following sections.
3.1.1 Serial console. Use the console cable (null modem cable) to connect the console port of the D-LINK IP SAN storage to the RS-232 port of the management PC.
3.1.3 Web UI. The D-LINK IP SAN storage can be operated via a graphical user interface (GUI). Be sure to connect the LAN cable. The default IP setting is DHCP; open the browser and enter: http://192.168.0.32 A dialog for authentication will then pop up.
Indicator description: RAID light: Green = RAID works well; Red = RAID fails. Temperature light: Green = temperature is normal; Red = temperature is abnormal. Voltage light: Green = voltage is normal; Red = voltage is abnormal.
Mute alarm beeper. Tips: If the status indicators in Internet Explorer (IE) are displayed in gray, but not in blinking red, please enable the "Internet Options" > "Advanced" > "Play animations in webpages" option in IE.
Figure 3.2.1.2 Step 2: Confirm the management port IP address and DNS, and then click "Next". Figure 3.2.1.3 Step 3: Set up the data port IP and click "Next".
Figure 3.2.1.4 Step 4: Set up the RAID level and volume size and click "Next". Figure 3.2.1.5 Step 5: Check all items, and click "Finish".
Figure 3.2.1.6 Step 6: Done.
3.2.2 Volume creation wizard. The "Volume create wizard" has a smarter policy. When the system has HDDs inserted, the wizard lists all possibilities and sizes for the different RAID levels; it will use all available HDDs for the RAID level the user chooses.
Figure 3.2.2.1 Step 2: Please select the combination for the RG capacity, or "Use default algorithm" for maximum RG capacity. After the RG size is chosen, click "Next".
Step 3: Decide the VD size. The user can enter a number less than or equal to the default. Then click "Next". Figure 3.2.2.3 Step 4: Confirmation page. Click "Finish" if all setups are correct; a VD will then be created. Step 5: Done. The system is available now.
Chapter 4 Configuration
4.1 Web UI management interface hierarchy
The table below shows the hierarchy of the web GUI. System configuration: System setting: System name / Date and time / System indicati.
Maintenance: System information (system information), Event log (Download / Mute / Clear), Upgrade (browse the firmware to upgrade), Firmware synchronization (synchronize the slave controller).
Figure 4.2.1.1 Check "Change date and time" to set up the current date, time, and time zone before use, or synchronize the time from an NTP (Network Time Protocol) server. Click "Confirm" in System indication to turn on the system indication LED.
Figure 4.2.2.1
4.2.3 Login setting. "Login setting" can set single-admin mode, the auto-logout time, and the admin / user passwords. Single-admin mode prevents multiple users from accessing the same system at the same time. 1. Auto logout: the options are (1) Disabled; (2) 5 minutes; (3) 30 minutes; (4) 1 hour.
Figure 4.2.3.1 Check "Change admin password" or "Change user password" to change the admin or user password. The maximum password length is 12 characters.
4.2.4 Mail setting. "Mail setting" can hold up to 3 mail addresses for receiving event notifications.
Figure 4.2.4.1
4.2.5 Notification setting. "Notification setting" can set up SNMP traps for alerting via SNMP, pop-up messages via Windows messenger (not MSN), alerts via the syslog protocol, and the event log filter for web UI and LCM notifications.
Figure 4.2.5.1 "SNMP" allows up to 3 SNMP trap addresses. The default community setting is "public". The user can choose the event log levels; the default setting enables ERROR and WARNING event logs in SNMP. There are many SNMP tools available.
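The event levels used here (INFO / WARNING / ERROR) map naturally onto syslog severities. As an illustration of how a syslog receiver prioritizes such messages, the RFC 3164 PRI prefix can be computed as below; the mapping from this manual's levels to severity numbers is an assumption for illustration, not something the manual specifies.

```python
# Syslog severities (RFC 3164); the level-name mapping is an assumption.
SEVERITY = {"ERROR": 3, "WARNING": 4, "INFO": 6}

def pri(facility: int, level: str) -> str:
    """Compute the syslog <PRI> prefix: facility * 8 + severity."""
    return f"<{facility * 8 + SEVERITY[level]}>"

print(pri(1, "WARNING"))  # user-level facility (1), warning (4) -> "<12>"
```

A message such as `<12>Oct 11 22:14:15 dsn6420 event: ...` would thus be filed as a user-level warning by the receiving daemon.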
Most UNIX systems have a built-in syslog daemon. The "Event log filter" setting can enable event log display on "Pop up events" and "LCM".
4.3 iSCSI configuration. "iSCSI configuration" is designed for setting up "Entity Property", "NIC", "Node", "Session", and "CHAP account".
Figure 4.3.1.2 Default gateway: the default gateway can be changed by checking the gray button of a LAN port and clicking "Become default gateway". There can be only one default gateway. MTU / Jumbo frame: jumbo frames (a larger MTU, Maximum Transmission Unit) can be enabled by checking the gray button of a LAN port and clicking "Enable jumbo frame".
LACP packets to the peer. The advantages of LACP are (1) increased bandwidth and (2) failover when the link status fails on a port. The Trunking / LACP setting can be changed by clicking the "Aggregation" button. Figure 4.3.1.3 (Figure 4.3.1.3: There are 2 iSCSI data ports on each controller; select at least two NICs for link aggregation.)
Figure 4.3.1.5 (Figure 4.3.1.5 shows that a user can ping the host from the target to make sure the data port connection is good.)
4.3.2 Entity property. "Entity property" can view the entity name of the system and set up the "iSNS IP" for iSNS (Internet Storage Name Service).
Figure 4.3.3.1 CHAP: CHAP is the abbreviation of Challenge Handshake Authentication Protocol. CHAP is a strong authentication method used on point-to-point links for user login. The authentication server sends the client a random challenge; the client combines the challenge with the shared secret and returns a one-way hash, so the secret itself is never transmitted.
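The challenge/response exchange is defined in RFC 1994: the response is the MD5 digest of the one-byte identifier, the shared secret, and the challenge. A minimal sketch of that computation (the secret and challenge values here are illustrative only):

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """RFC 1994 CHAP: response = MD5(identifier || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# The authenticator (the iSCSI target) sends a random challenge; the
# initiator answers with the digest. The secret never crosses the wire.
challenge = os.urandom(16)
resp = chap_response(0x01, b"example-secret", challenge)

# The target computes the same digest over its stored secret and compares:
assert resp == chap_response(0x01, b"example-secret", challenge)
```

Both sides must hold the same secret in advance; a mismatch produces a different digest and the login is rejected.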
Figure 4.3.3.3 5. Go to the "/ iSCSI configuration / CHAP account" page to create a CHAP account. Please refer to the next section for more detail. 6. Check the gray button in the "OP." column and click "User". 7. Select the CHAP user(s) to be used.
Rename alias: the user can create an alias for a device node. 1. Check the gray button in the "OP." column next to a device node. 2. Select "Rename alias". 3. Create an alias for that device node. 4. Click "OK" to confirm. 5. The alias appears at the end of that device node.
8. DataSequenceInOrder (Data Sequence in Order). 9. DataPDUInOrder (Data PDU in Order). 10. Details of authentication status and source IP:port number. Figure 4.3.4.1 (Figure 4.3.4.1: iSCSI session.) Check the gray button of the session number and click "List connection".
Figure 4.3.5.1 3. Click "OK". Figure 4.3.5.2 4. Click "Delete" to delete a CHAP account.
4.4 Volume configuration. "Volume configuration" is designed for setting up the volume configuration, which includes "Physical disk", "RAID group", "Virtual disk", "Snapshot", "Logical unit", and "Replication".
4.4.1 Physical disk. "Physical disk" can view the status of the hard drives in the system. The following are the operational steps: 1. Check the gray button next to the slot number; it will show the functions which can be executed. 2. Active functions can be selected; inactive functions are shown in gray and cannot be selected.
Figure 4.4.1.3 (Figure 4.4.1.3: Physical disks in slots 1, 2, 3 are used to create a RG named "RG-R5". Slot 4 is set as a dedicated spare disk of the RG named "RG-R5". The others are free disks.) Step 4: The unit of size can be changed from (GB) to (MB).
"Failed": the hard drive has failed. "Error Alert": S.M.A.R.T. error alert. "Read Errors": the hard drive has unrecoverable read errors. Usage, the usage of the hard drive: "RAID disk": this hard drive has been set to a RAID group.
Set Dedicated spares: set a hard drive as a dedicated spare of the selected RG. Upgrade: upgrade the hard drive firmware. Disk Scrub: scrub the hard drive. Turn on/off the indication LED: turn on the indication LED of the hard drive; click again to turn it off.
Step 2: Confirmation page. Click "OK" if all setups are correct. Figure 4.4.2.2 (Figure 4.4.2.2: There is a RAID 0 with 4 physical disks, named "RG-R0". The second RAID group is a RAID 5 with 3 physical disks, named "RG-R5".) Step 3: Done. View the "RAID group" page.
Health, the health of the RAID group: "Good": the RAID group is good. "Failed": the RAID group has failed. "Degraded": the RAID group is not healthy and not complete; the reason could be a missing or failed disk. RAID: the RAID level of the RAID group.
Property. Write cache: "Enabled" = enable disk write cache (default); "Disabled" = disable disk write cache. Standby: "Disabled" = disable auto spin-down (default); "30 sec / 1 min / 5 min / 30 min" = enable hard drive auto spin-down to save power when there has been no access for the given period of time.
Figure 4.4.3.1 Caution: If the system is shut down or rebooted while a VD is being created, the erase process will stop. Step 2: Confirmation page. Click "OK" if all setups are correct. Figure 4.4.3.2 (Figure 4.4.3.2: Create a VD named "VD-01" from "RG-R0".)
VD column description: The button includes the functions which can be executed. Name: virtual disk name. Size (GB) (MB): total capacity of the virtual disk; the unit can be displayed in GB or MB. Write, the right of the virtual disk: "WT" = Write Through.
Clone: the clone target name of the virtual disk. Schedule: the clone schedule of the virtual disk. Health, the health of the virtual disk: "Optimal": the virtual disk is working well and there is no failed disk in the RG. "Degraded": at least one disk of the virtual disk's RG has failed or been plugged out.
/ … / 100. Delete: delete the virtual disk. Set property: change the VD name, right, priority, bg rate and read ahead. Right: "WT" = Write Through; "WB" = Write Back (default); "RO" = Read Only. Priority: "HI" = high priority.
Stop clone: stop the clone function. Schedule clone: set the clone function by schedule. Set snapshot space: set the snapshot space for taking snapshots; please refer to the next chapter for more detail. Cleanup snapshot: clean all snapshots of a VD and release the snapshot space.
Figure 4.4.4.2 (Figure 4.4.4.2: The "VD-01" snapshot space has been created; the snapshot space is 15 GB, and 1 GB is used for saving the snapshot index.) Step 3: Take a snapshot. In "/ Volume configuration / Snapshot", click "Take snapshot". It will link to the next page.
Step 5: Attach a LUN to a snapshot VD. Please refer to the next section for attaching a LUN. Step 6: Done. The snapshot VD can be used. Snapshot column description: The button includes the functions which can be executed. Name: snapshot VD name.
Delete: delete the snapshot VD. Attach: attach a LUN. Detach: detach a LUN. List LUN: list the attached LUN(s).
4.4.5 Logical unit. "Logical unit" can view, create, and modify the status of the attached logical unit number(s) of each VD. The user can attach a LUN by clicking "Attach".
LUN operation description: Attach: attach a logical unit number to a virtual disk. Detach: detach a logical unit number from a virtual disk. The matching rules of access control follow the LUNs' creation time; an earlier-created LUN takes priority in the matching rules.
Figure 4.4.6.1 1. Select "/ Volume configuration / RAID group". 2. Click "Create". 3. Input a RG name, choose a RAID level from the list, click "Select PD" to choose the RAID physical disks, then click "OK". 4. Check the setting.
Figure 4.4.6.3 1. Select "/ Volume configuration / Virtual disk". 2. Click "Create". 3. Input a VD name, choose a RG name and enter a size for this VD; decide the stripe height, block size, read / write mode, bg rate, and set the priority; finally click "OK".
Figure 4.4.6.5 1. Select a VD. 2. Input the "Host" IQN, which is an iSCSI node name for access control, or fill in the wildcard "*", which means every host can access this volume. Choose the LUN and permission, and then click "OK". 3. Done. Figure 4.
Figure 4.4.6.7 (Figure 4.4.6.7: Slot 4 is set as a global spare disk.) Step 5: Done. To delete VDs and the RG, please follow the steps below. Step 6: Detach a LUN from the VD. In "/ Volume configuration / Logical unit": Figure 4.4.6.8 1. Check the gray button next to the LUN; click "Detach".
To delete a RAID group, please follow this procedure: 1. Select "/ Volume configuration / RAID group". 2. Select a RG whose VDs have all been deleted; otherwise this RG cannot be deleted. 3. Check the gray button next to the RG number and click "Delete".
4.5.1 Hardware monitor. "Hardware monitor" can view the information of the current voltages and temperatures. Figure 4.5.1.1
If "Auto shutdown" is checked, the system will shut down automatically when a voltage or temperature is out of the normal range. For better data protection, please check "Auto Shutdown".
Figure 4.5.2.2 (Figure 4.5.2.2: With Smart-UPS.) UPS column description: UPS Type: select the UPS type. Choose Smart-UPS for APC, or None for other vendors or no UPS. Shutdown Battery Level (%): when the battery falls below this level, the system will shut down. Setting the level to "0" will disable the UPS.
Battery Level (%): current power percentage of the battery.
4.5.3 SES. SES stands for SCSI Enclosure Services, one of the enclosure management standards. "SES configuration" can enable or disable the management of SES. Figure 4.5.3.1 (Figure 4.
Figure 4.5.4.1 (SAS drives & SATA drives)
4.6 System maintenance. "Maintenance" allows the operation of system functions, including "System information" to show the system version .
Status description: Normal: dual controllers are in the normal state. Degraded: one controller has failed or been plugged out. Lockdown: the firmware of the two controllers differs, or the memory size of the two controllers differs. Single: single controller mode.
The event log is displayed in reverse order, which means the latest event log is on the first / top page. The event logs are actually saved in the first four hard drives; each hard drive has one copy of the event log.
4.6.3 Upgrade. "Upgrade" can upgrade the controller firmware and JBOD firmware, change the operation mode, and activate the Replication license. Figure 4.6.3.1 Please prepare the new controller firmware file named "xxxx.bin" on the local hard drive, then click "Browse" to select the file.
to the master ones, no matter whether the slave controller's firmware version is newer or older than the master's. In normal status, the firmware versions in controllers 1 and 2 are the same, as in the figure below. Figure 4.6.4.1
4.6.5 Reset to factory default. "Reset to factory default" allows the user to reset the IP SAN storage to the factory default settings.
1. Import: import all system configurations excluding the volume configuration. 2. Export: export all configurations to a file. Caution: "Import" will import all system configurations excluding the volume configuration; the current configurations will be replaced.
For security reasons, please use "Logout" to exit the web UI. To log in to the system again, please enter the username and password once more.
4.7.3 Mute. Click "Mute" to stop the alarm when an error occurs.
Chapter 5 Advanced operations
5.1 Volume rebuild
If one physical disk of a RG which is set to a protected RAID level (e.g. RAID 3, RAID 5, or RAID 6) fails or has been unplugged / removed, the status of the RG changes to degraded mode, and the system will search for / detect a spare disk to rebuild the degraded RG into a complete one.
Rebuild operation description: RAID 0: disk striping. No protection for data; the RG fails if any hard drive fails or is unplugged. RAID 1: disk mirroring over 2 disks; RAID 1 allows one hard drive to fail or be unplugged.
5.2 RG migration. To migrate the RAID level, please follow the procedure below. 1. Select "/ Volume configuration / RAID group". 2. Check the gray button next to the RG number; click "Migrate". 3. Change the RAID level by clicking the down arrow and selecting "RAID 5".
5.3 VD extension. To extend the VD size, please follow the procedure. 1. Select "/ Volume configuration / Virtual disk". 2. Check the gray button next to the VD number; click "Extend". 3. Change the size. The size must be larger than the original; then click "OK" to start the extension.
whatever unfortunate reason it might be (e.g. virus attack, data corruption, human error and so on). The snapshot VD is allocated within the same RG in which the snapshot is taken; we suggest reserving 20% of the RG size or more for snapshot space.
Figure 5.4.1.1 7. Check the gray button next to the snapshot VD number; click "Expose". Enter a capacity for the snapshot VD. If the size is zero, the exposed snapshot VD is read-only. Otherwise, the exposed snapshot VD can be read / written, and the size is the maximum capacity for writing.
Figure 5.4.2.1 (Figure 5.4.2.1: It will take snapshots every month, and keep the last 32 snapshot copies.) Tips: Daily snapshots are taken at 00:00 every day. Weekly snapshots are taken every Sunday at 00:00. Monthly snapshots are taken on the first day of every month at 00:00.
5.4.4 Snapshot constraint. The D-LINK snapshot function applies a copy-on-write technique to the UDV/VD and provides a quick and efficient backup methodology. When taking a snapshot, no data is copied at first; copying happens only when a data-modification request comes in.
On Linux and UNIX platforms, a command named sync can be used to make the operating system flush data from the write cache to disk. For the Windows platform, Microsoft also provides a tool, sync, which does exactly the same thing as the sync command on Linux/UNIX.
When a snapshot has been rolled back, the other snapshots which are earlier than it will also be removed, but the remaining snapshots will be kept after the rollback. If a snapshot has been deleted, the other snapshots which are earlier than it will also be deleted.
Figure 5.6.1 2. Create two virtual disks (VD), "SourceVD_R5" and "TargetVD_R6". The RAID type of the backup target needs to be set as "BACKUP". Figure 5.6.2 3. Here are the objects: a source VD and a target VD. Before starting the clone process, the VD clone rule needs to be deployed first.
Figure 5.6.4 Snapshot space: Figure 5.6.5 This setting is the ratio of the snapshot space to the source VD. The default ratio is 2 to 1, i.e. the automatically created snapshot space is double the size of the source VD.
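The two sizing rules used in this chapter, the default 2:1 automatic allocation of snapshot space relative to the source VD, and the earlier suggestion to reserve at least 20% of the RG for snapshots, can be sketched as simple helpers (the function names are illustrative, not from the manual):

```python
def auto_snapshot_space(vd_gb: float, ratio: float = 2.0) -> float:
    """Default automatic allocation: snapshot space = ratio x source VD size."""
    return ratio * vd_gb

def suggested_reserve(rg_gb: float, fraction: float = 0.20) -> float:
    """Manual's suggestion: keep at least 20% of the RG free for snapshots."""
    return fraction * rg_gb

print(auto_snapshot_space(50))   # 50 GB source VD -> 100.0 GB snapshot space
print(suggested_reserve(1000))   # 1000 GB RG      -> 200.0 GB reserved
```

If the write rate to the source VD is high, the copy-on-write increments can outgrow these defaults, which is exactly the "run out of snapshot space" failure described below.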
Restart the task an hour later if failed (the setting takes effect after enabling schedule clone): Figure 5.6.7 When running out of snapshot space, the VD clone process will be stopped, because there is no more available snapshot space.
Figure 5.6.9 7. Now the clone target "TargetVD_R6" has been set. Figure 5.6.10 8. Click "Start clone"; the clone process will start. Figure 5.6.11 9. The default setting will create a snapshot space automatically whose capacity is double the size of the VD space.
Figure 5.6.12 10. After initiating the snapshot space, it will start cloning. Figure 5.6.13 11. Click "Schedule clone" to set up the clone by schedule. Figure 5.6.14 12. There are "Set Clone schedule" and "Clear Clone schedule" on this page.
Figure 5.6.15 Running out of snapshot space while cloning a VD: While the clone is processing, the incremental data of this VD may exceed the snapshot space. The clone will complete, but the clone snapshot will fail. The next time clone is started, a warning message appears: "There is not enough snapshot space for the operation".
102 Figure 5.6.16.
5.7 SAS JBOD expansion
5.7.1 Connecting JBOD
The D-LINK controller supports SAS JBOD expansion to connect an extra SAS dual-controller JBOD. When a connected dual JBOD is detected, it will be displayed in "Show PD for:" of "/ Volume configuration / Physical disk".
Figure 5.7.1.2 Figure 5.7.1.3 "/ Enclosure management / S.M.A.R.T." can display the S.M.A.R.T. information of all PDs, including Local and all SAS JBODs. Figure 5.7.1.4 (Figure 5.7.1.4: Disk S.M.A.R.T. information of JBOD 1, although S.M.A.R.
SAS JBOD expansion has some constraints, as described in the following: 1. Users can create a RAID group spanning multiple chassis; the maximum number of disks in a single RAID group is 32. 2. A global spare disk can serve all RAID groups, whichever chassis they are located in.
5.8 MPIO and MC/S
These features come from the iSCSI initiator. They can be set up from the iSCSI initiator to establish redundant paths for sending I/O from the initiator to the target.
Figure 5.8.2 Difference: MC/S is implemented at the iSCSI level, while MPIO is implemented at a higher level. Hence, all MPIO infrastructure is shared among all SCSI transports, including Fibre Channel, SAS, etc. MPIO is the most common usage across all OS vendors.
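The simplest load-balance policy an MPIO or MC/S configuration can apply is round-robin over the available paths. The sketch below is purely illustrative of that policy, not of any particular initiator implementation, and the addresses are placeholders:

```python
from itertools import cycle

# Two iSCSI data paths (placeholder addresses), cycled round-robin.
paths = cycle(["10.0.0.1:3260", "10.0.1.1:3260"])

def next_path() -> str:
    """Pick the path for the next outgoing I/O (round-robin policy)."""
    return next(paths)

print([next_path() for _ in range(4)])
# alternates between the two paths; on a path failure, a real MPIO stack
# would instead fail over all I/O to the surviving path
```

Real initiators offer further policies (failover-only, least queue depth, weighted), but they all reduce to choosing one of the redundant sessions per I/O.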
5.9 Trunking and LACP
Link aggregation is the technique of taking several distinct Ethernet links and letting them appear as a single link. It has larger bandwidth and provides fault tolerance. Besides the advantage of wide bandwidth, the I/O traffic keeps operating until all physical links fail.
Figure 5.9.2 Caution: Before using trunking or LACP, the gigabit switch must support trunking or LACP and have it enabled. Otherwise, the host cannot connect the link with the storage device.
5.10 Dual controllers (only for DSN-6420)
5.10.1 Perform I/O
Please refer to the following topology and have all the connections ready.
Figure 5.10.1.1
5.10.2 Ownership
When creating a RG, it will be assigned a preferred owner; the default owner is controller 1. To change the RG ownership, please follow the procedure. 1. Select "/ Volume configuration / RAID group". 2. Check the gray button next to the RG name; click "Set preferred owner".
Figure 5.10.2.2 (Figure 5.10.2.2: The RG ownership is changed to the other controller.)
5.10.3 Controller status
There are four statuses, described in the following. They can be found in "/ System maintenance / System information". 1. Normal: dual controller mode.
5.11 Replication
The Replication function helps users replicate data easily through a LAN or WAN from one IP SAN storage to another. The procedure of Replication is as follows: 1. Copy all data from the source VD to the target VD at the beginning (full copy).
3. If you want the replication port to be on a special VLAN section, you may assign a VLAN ID to the replication port. The setting will automatically be duplicated to the other controller. Create a backup virtual disk on the target IP SAN storage: 1.
Figure 5.11.4 Create a replication job on the source IP SAN storage. 1. If the license key is activated on the IP SAN storage correctly, a new Replication tab will be added to the Web UI. Click "Create" to create a new replication job. Figure 5.
Figure 5.11.7 4. Replication uses the standard iSCSI protocol for data replication. The user has to log on to the iSCSI node to create the iSCSI connection for the data transmission. Enter the CHAP information if necessary and select the target node to log on.
Figure 5.11.9 6. A new replication job is created and listed on the Replication page. Figure 5.11.10 Run the replication job: 1. Click the "OP." button on the replication job to open the operation menu. Click "Start" to run the replication job.
Figure 5.11.12 3. The user can monitor the replication job from the "Status" information; the progress is expressed as a percentage. Figure 5.11.13 Create a multi-path on the replication job: 1. Click "Create multi-path" in the operation menu of the replication job.
Figure 5.11.15 3. Select the iSCSI node to log on to and click "Next". Figure 5.11.16 4. Choose the same target virtual disk and click "Next".
Figure 5.11.17 5. A new target will be added to this replication job as a redundant path. Figure 5.11.18 Configure the replication job to run by schedule: 1.
2. The replication job can be scheduled to run hourly, daily, weekly or monthly. The execution time is configurable per the user's need. If the scheduled execution time arrives but the previous replication job is still running, that scheduled execution will be skipped once.
Figure 5.11.21 There are three settings in the Replication configuration menu. Figure 5.11.22 "Snapshot space" specifies the ratio of snapshot space allocated automatically to the source virtual disk when the snapshot space has not been configured in advance.
5.12 VLAN
A VLAN (Virtual Local Area Network) is a logical grouping mechanism implemented on switch devices using software rather than a hardware solution. VLANs are collections of switch ports that comprise a single broadcast domain. VLANs allow network traffic to flow more efficiently within these logical subgroups.
Figure 5.12.2 4. VLAN ID 66 for LAN2 is set properly. Figure 5.12.3 Assign a VLAN ID to a LAG (Trunking or LACP): 1. After creating the LAG, press the "OP" button next to the LAG, and select "Set VLAN ID". Figure 5.12.4 2. Enter the VLAN ID and click "OK".
Figure 5.12.5 3. If iSCSI ports are assigned VLAN IDs before aggregation takes place, the aggregation will remove the VLAN IDs. You need to repeat steps 1 and 2 to set the VLAN ID for the aggregation group. Assign a VLAN ID to the replication port: please consult Figure 5.
Chapter 6 Troubleshooting
6.1 System buzzer
The system buzzer features are listed below: 1. The system buzzer alarms for 1 second when the system boots up successfully. 2. The system buzzer alarms continuously when an error occurs. The alarm stops after the error is resolved or the buzzer is muted.
ERROR    SATA PRD mem fail        Failed to init SATA PRD memory manager
ERROR    SATA revision id fail    Failed to get SATA revision id
ERROR    SATA set reg fail        Failed to set SATA register
ERROR    SATA init fail
RMS events
Level    Type             Description
INFO     Console Login    <username> login from <IP or serial console> via Console UI
INFO     Console Logout   <username> logout from <IP or serial console> via Console UI
ERROR    VD move failed        Failed to complete move of VD <name>.
INFO     RG activated          RG <name> has been manually activated.
INFO     RG deactivated        RG <name> has been manually deactivated.
INFO     VD rewrite started    Rewrite at LBA <address> of VD <name> starts.
INFO     VD erase started      VD <name> starts erasing process.

Snapshot events
Level    Type                  Description
WARNING  Snap mem              Failed to allocate snapshot memory for VD <name>.
WARNING  Snap space overflow   Failed to allocate snapshot space for VD <name>.
INFO     PD upgrade started    JBOD <name> PD [<string>] starts upgrading firmware process.
INFO     PD upgrade finished   JBOD <name> PD [<string>] finished upgrading firmware process.
WARNING  PD upgrade failed     JBOD <name> PD [<string>] upgrade firmware failed.
System maintenance events
Level    Type                      Description
INFO     System shutdown           System shutdown.
INFO     System reboot             System reboot.
INFO     System console shutdown   System shutdown from <string> via Console UI.
Level    Type                Description
INFO     VD clone started    VD <name> starts cloning process.
INFO     VD clone finished   VD <name> finished cloning process.
WARNING  VD clone failed     The cloning in VD <name> failed.
INFO     VD clone aborted    The cloning in VD <name> was aborted.
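Every event in the tables above carries a severity level (INFO, WARNING, or ERROR), which makes exported logs easy to triage. A small parsing sketch, assuming each exported line begins with one of those levels; this is a user-side convenience, not part of the appliance:

```python
def events_at_or_above(lines, min_level="WARNING"):
    """Keep only events whose severity is at least min_level, assuming
    each line starts with one of the levels used in the event tables.
    Lines with an unrecognized first token are dropped."""
    order = {"INFO": 0, "WARNING": 1, "ERROR": 2}
    threshold = order[min_level]
    return [line for line in lines
            if order.get(line.split(None, 1)[0], -1) >= threshold]

log = [
    "INFO VD clone started VD <name> starts cloning process.",
    "WARNING VD clone failed The cloning in VD <name> failed.",
    "ERROR VD move failed Failed to complete move of VD <name>.",
]
print(events_at_or_above(log))   # keeps only the WARNING and ERROR lines
```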
Appendix

A. Certification list

iSCSI Initiator (Software)
OS                  Software/Release Number
Microsoft Windows   Microsoft iSCSI Software Initiator Release v2.08
                    System Requirements:
                    1. Windows 2000 Server with SP4
                    2. Windows Server 2003 with SP2
                    3. Windows Server 2008 with SP2
Linux               The iSCSI Initiators are different for different Linux kernels.
D-Link    All D-Link Managed Gigabit Switches
Avago     AFBR-703SDZ (10 Gb/s SFP transceiver, 850 nm)
Finisar   FTLX8571D3BCV (10 Gb/s SFP transceiver, 850 nm)

10GbE Switch
Vendor    Model
Dell      PowerConnect
Vendor    Model
Hitachi   Deskstar 7K250, HDS722580VLSA80, 80GB, 7200RPM, SATA, 8M
Hitachi   Deskstar E7K500, HDS725050KLA360, 500GB, 7200RPM, SATA II, 16M
Hitachi   Deskstar 7K80, HDS728040PLA320, 40GB
Vendor    Model
Seagate   Constellation, ST9500530NS, 500GB, 7200RPM, SATA 3.0Gb/s, 32M (F/W: SN02)

B. Microsoft iSCSI initiator
Here are the step-by-step instructions to set up the Microsoft iSCSI Initiator. Please visit the Microsoft website for the latest iSCSI initiator.
Figure B.2
Figure B.3
4. It can connect to an iSCSI disk now.

MPIO
5. If running MPIO, please continue.
6. Click the "Discovery" tab to connect the second path.
Figure B.4
Figure B.5
8. Click "OK".
Figure B.6
Figure B.7
9. Click the "Targets" tab, select the second path, and then click "Connect".
10. Check the "Enable multi-path" checkbox, then click "OK".
11. Done. It can now connect to an iSCSI disk with MPIO.

MC/S
12. If running MC/S, please continue.
Figure B.10
Figure B.11
17. Select the Initiator IP and Target portal IP, and then click "OK".
18. Click "Connect".
19. Click "OK".
Figure B.
Disconnect
21. Select the target name, click "Disconnect", and then click "Yes".
Figure B.14
22. Done. The iSCSI device disconnected successfully.
C. From single controller to dual controllers
This SOP applies to upgrading from DSN-6110 to DSN-6120 as well as from DSN-6410 to DSN-6420. Before you begin, please make sure that either the DSN-6110 or the DSN-6410 is properly installed according to the manuals, especially the HDD trays.
Please follow the steps below to upgrade to dual controller mode.

Step 1
Go to "Maintenance \ System". Copy the IP SAN storage serial number.

Step 2
Go to "Maintenance \ Upgrade" and paste the serial number into the "Controller Mode" section. Select "Dual" as the operation mode.
Step 3
Click "Confirm". The system will ask you to shut down. Please shut down the IP SAN storage. Click "OK".
Go to "Maintenance \ Reboot and shutdown". Click "Shutdown" to shut down the system. Click "OK".
Step 4
Power off the DSN-6110 or DSN-6410. Insert the second controller into the IP SAN storage, and then power on the system. The IP SAN storage should now run in dual controller mode as either a DSN-6120 or a DSN-6420. You may go to "Maintenance \ System information" to verify.