[cdwg] [HPDD-discuss] Lustre 2.6.0 released

E.S. Rosenberg esr+hpdd-discuss at mail.hebrew.edu
Sun Aug 3 06:49:03 PDT 2014


What does this mean for 2.4.x and 2.5.x?

Originally 2.4.x was supposed to be the version that would be supported for
a long period; then 2.5.x became that version because of HSM, as far as I
understand.

Thanks,
Eli


On Thu, Jul 31, 2014 at 4:03 AM, Jones, Peter A <peter.a.jones at intel.com>
wrote:

> We are pleased to announce that the Lustre 2.6.0 Release has been declared
> GA and is available for download
> <https://downloads.hpdd.intel.com/public/lustre/latest-feature-release/>.
> You can also grab the source from git
> <http://git.whamcloud.com/fs/lustre-release.git/commit/73ea776053d99f74a9f5679fe55ec5d9461b8a89>
>
> This major release includes new features:
>
> MDT-OST Consistency Check and Repair (LFSCK Phase 2) - Allows the MDS to
> verify the consistency of a Lustre filesystem while it is mounted and in
> use. The latest enhancements check and repair the validity of the OST
> objects of regular files, and identify and optionally remove, or link into
> lost+found, OST objects that are not referenced by any file on the MDS.
> This development is funded by OpenSFS (LU-1267
> <https://jira.hpdd.intel.com/browse/LU-1267>)
>
> Single Client Performance Improvements - Single thread per process IO
> performance has been improved. This work was discussed in detail at LUG
> <http://cdn.opensfs.org/wp-content/uploads/2014/04/D1_S6_LustreClientIOPerformanceImprovements.pdf>
> (LU-3321 <https://jira.hpdd.intel.com/browse/LU-3321>)
>
> Striped Directories - Enables a single directory to be striped across
> multiple MDTs to improve single-directory performance and scalability. This
> is a technology preview of part of the DNE phase 2 work funded by OpenSFS
> that will be fully available in a future Lustre release (LU-3531
> <https://jira.hpdd.intel.com/browse/LU-3531>)
>
> Fuller details can be found in the change log
> <https://wiki.hpdd.intel.com/display/PUB/Changelog+2.6>, the scope
> statement
> <https://wiki.hpdd.intel.com/display/PUB/Lustre+2.6+Scope+Statement> and
> the test matrix <https://wiki.hpdd.intel.com/display/PUB/Lustre+2.6>
>
> The following are known issues in the Lustre 2.6 Release:
>
> LU-5057 <https://jira.hpdd.intel.com/browse/LU-5057> - A rare race
> condition can lead to an LASSERT when unmounting an OST.
> LU-5150 <https://jira.hpdd.intel.com/browse/LU-5150> - Empty access
> control lists (ACLs) will be stored for copied files when using a ZFS MDS.
> This does not affect ldiskfs MDSes.
> LU-4367 <https://jira.hpdd.intel.com/browse/LU-4367> - Metadata
> performance is affected when unlinking files in a single shared directory,
> a pattern common to the mdtest benchmark.
> LU-5420 <https://jira.hpdd.intel.com/browse/LU-5420> - DNE configurations
> with multiple MDTs sharing a single node with an MGS may hang during MDT
> mount, or fail to mount an MDT after an unclean shutdown.
> Work is in progress for these issues.
>
> NOTE: Use of the e2fsprogs-based lfsck has been deprecated and replaced
> by "lctl lfsck_start". Using the older e2fsprogs-based lfsck may lead to
> filesystem corruption. Once available, it is also recommended to use
> e2fsprogs-1.42.11.wc2 (or newer).
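The migration described in the note above can be sketched with the new online LFSCK commands. This is a hedged illustration only: the device name "testfs-MDT0000" is a placeholder, and option names and parameter paths should be verified against the Lustre manual for your release.

```shell
# Run on the MDS node. "testfs-MDT0000" is an assumed/example MDT device name.

# Start an online layout scan (the LFSCK Phase 2 check of OST objects):
lctl lfsck_start -M testfs-MDT0000 -t layout

# Monitor progress (assumed parameter path; check your version's docs):
lctl get_param -n mdd.testfs-MDT0000.lfsck_layout

# Stop the scan early if needed:
lctl lfsck_stop -M testfs-MDT0000
```

Unlike the deprecated e2fsprogs-based lfsck, these commands operate while the filesystem is mounted and in use, which is the point of the LFSCK Phase 2 feature announced above.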
> Please log any issues found in the issue tracking system
> <https://jira.hpdd.intel.com/>
>
> We would like to thank OpenSFS <http://www.opensfs.org/> for their
> contributions towards the cost of the release, and also all Lustre
> community members who have contributed to the release with code and/or
> testing.
>
> _______________________________________________
> HPDD-discuss mailing list
> HPDD-discuss at lists.01.org
> https://lists.01.org/mailman/listinfo/hpdd-discuss
>