I am looking for a way to read DataFlex 6.2 data files from a .NET C# program. I need sequential, read-only access to the table data, nothing fancy; basically I just need to decode a few .dat files containing the data and build a DataTable from them.
I know there are commercial products like FlexODBC, but that seems like overkill for such a relatively simple task. Does anyone know of a free alternative, or of documentation of the data file structure, so I don't have to work it out myself with a hex editor?
I recently found this link. I checked it against a few tables and it is not 100% accurate, but it is good guidance:
DATAFLEX 2.3B DATAFILE HEADER STRUCTURE
By Peter M. Grillo
MAINSTREAM COMPUTER CONSULTING
Following is the structure of the DataFlex .DAT file for 2.3. Data
Access Corporation has deemed the structure of the .DAT file as
proprietary. The following definition of a 2.3 .DAT file was derived
independently by myself and any problem arising from the use of this
information will be your problem. Please do not call DAC and snivel. Use
at own risk. Please do not upload this to DAC's BBS.
DAC has indicated to me that I can release this information providing I
include the prior disclaimer.
All that aside, this is everything I know about a DataFlex .DAT file.
The overall layout of a 2.3 .DAT file is header, null record and data.
The header contains information about the file definition. Just about
everything you define in DFFILE can be found in the header except for
tag names. It is possible to read the header of a 2.3 .DAT file and the
corresponding .TAG file to produce a perfect .DEF file.
The following show offsets into the header:
(LSB = Least significant byte)
(MSBT = Most significant bit)
DECIMAL HEX DESCRIPTION
01 - 04 00 - 03 HIGHEST RECORD COUNT EVER (LSB FIRST)
09 - 12 08 - 0B RECORD COUNT (LSB FIRST)
13 - 16 0C - 0F MAXIMUM NUMBER OF RECORDS (LSB FIRST)
79 - 80 4E - 4F RECORD LENGTH (LSB FIRST)
89 58 DELETED SPACE (1=REUSED, 0=NOT REUSED)
90 59 NUMBER OF FIELDS
93 5C MULTIUSER REREAD (1=ACTIVE, 0=INACTIVE)
101 64 NUMBER OF FIELDS IN INDEX 1 (MSBT SET 1 IF BATCH)
102-108 65 - 6B FIELD SEGMENTS OF INDEX 1
109 6C NUMBER OF FIELDS IN INDEX 2 (MSBT SET 1 IF BATCH)
110-116 6D - 73 FIELD SEGMENTS OF INDEX 2
117 74 NUMBER OF FIELDS IN INDEX 3 (MSBT SET 1 IF BATCH)
118-124 75 - 7B FIELD SEGMENTS OF INDEX 3
125 7C NUMBER OF FIELDS IN INDEX 4 (MSBT SET 1 IF BATCH)
126-132 7D - 83 FIELD SEGMENTS OF INDEX 4
133 84 NUMBER OF FIELDS IN INDEX 5 (MSBT SET 1 IF BATCH)
134-140 85 - 8B FIELD SEGMENTS OF INDEX 5
141 8C NUMBER OF FIELDS IN INDEX 6 (MSBT SET 1 IF BATCH)
142-148 8D - 93 FIELD SEGMENTS OF INDEX 6
149 94 NUMBER OF FIELDS IN INDEX 7 (MSBT SET 1 IF BATCH)
150-156 95 - 9B FIELD SEGMENTS OF INDEX 7
157 9C NUMBER OF FIELDS IN INDEX 8 (MSBT SET 1 IF BATCH)
158-164 9D - A3 FIELD SEGMENTS OF INDEX 8
165 A4 NUMBER OF FIELDS IN INDEX 9 (MSBT SET 1 IF BATCH)
166-172 A5 - AB FIELD SEGMENTS OF INDEX 9
173 AC NUMBER OF FIELDS IN INDEX 10 (MSBT SET 1 IF BATCH)
174-180 AD - B3 FIELD SEGMENTS OF INDEX 10
181-189 B4 - BC FILE ROOT NAME (NULL TERMINATED)
START OF FIELD DEFINITIONS.
REPEAT FOR EACH FIELD.
197-198 C4 - C5 FIELD OFFSET (LSB FIRST)
199 C6 MSBT=MAIN INDEX, LSBT=(DECIMAL POINTS/2)
200 C7 FIELD LENGTH
201 C8 FIELD TYPE 00=ASCII, 01=NUMERIC, 02=DATE, 03=OVERLAP
202 C9 RELATES TO FILE NUMBER
203-204 CA - CB RELATES TO FIELD NUMBER (LSB FIRST)
...-... .. - .. (REPEAT FOR EACH FIELD)
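To make the offsets above concrete, here is a minimal C# sketch that reads the core 2.3 header fields (record count, record length, field count) and the per-field definitions. It follows the table literally; the file path is just a placeholder, and the 3.0 differences noted further down are not handled.
using System;
using System.IO;
namespace DataFlexSketches
{
    // Reads the 2.3 header fields at the offsets documented above (hex column, 0-based).
    class Df23HeaderSketch
    {
        static void Main()
        {
            using (var fs = File.OpenRead(@"D:\Table.DAT"))   // placeholder path
            using (var br = new BinaryReader(fs))
            {
                fs.Seek(0x08, SeekOrigin.Begin);
                uint recordCount = br.ReadUInt32();           // 08-0B RECORD COUNT (LSB first)
                fs.Seek(0x4E, SeekOrigin.Begin);
                ushort recordLength = br.ReadUInt16();        // 4E-4F RECORD LENGTH (LSB first)
                fs.Seek(0x59, SeekOrigin.Begin);
                byte fieldCount = br.ReadByte();              // 59 NUMBER OF FIELDS
                Console.WriteLine($"records={recordCount}, reclen={recordLength}, fields={fieldCount}");

                fs.Seek(0xC4, SeekOrigin.Begin);              // C4: first field definition, 8 bytes per field
                for (int i = 0; i < fieldCount; i++)
                {
                    ushort offset = br.ReadUInt16();          // C4-C5 field offset within the record
                    byte idxAndDec = br.ReadByte();           // C6 MSBT = main index, low bits = decimal points / 2
                    byte length = br.ReadByte();              // C7 field length
                    byte type = br.ReadByte();                // C8 0=ASCII, 1=NUMERIC, 2=DATE, 3=OVERLAP
                    byte relFile = br.ReadByte();              // C9 relates to file number
                    ushort relField = br.ReadUInt16();         // CA-CB relates to field number
                    Console.WriteLine($"field {i}: offset={offset} length={length} type={type}");
                }
            }
        }
    }
}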
The null record follows the header and usually contains 00h's. The
number of bytes in the null record corresponds to the record length of
the file. The null record is record number zero.
The data that follows are records in order of record number. The number
of bytes in each record corresponds to the record length. Records are
grouped together by blocks of 512 bytes. Not every record length,
however, divides evenly into 512 so you get the occurrence of fill bytes
or 0FFh's to round out a group of records to 512 bytes. Consider the
following:
Record Length Layout
128 Divides into 512 evenly so no fill
bytes are used
170 512 divided by 170 is 3 with a remainder
of 2, so after every 3 records
(starting at record 0) there are 2 fill
bytes (0FFh's)
Here is a table of common record lengths:
Record Length Records in 512 Group Number of Fill Bytes
256 2 0
170 3 2
128 4 0
102 5 2
85 6 2
73 7 1
64 8 0
56 9 8
51 10 2
46 11 6
42 12 8
39 13 5
36 14 8
34 15 2
32 16 0
30 17 2
28 18 8
26 19 18
25 20 12
24 21 8
23 22 6
22 23 6
21 24 8
20 25 12
19 26 18
18 28 8
17 30 2
16 32 0
15 34 2
14 36 8
13 39 5
12 42 8
11 46 6
10 51 2
9 56 8
8 64 0
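The fill-byte table above is just integer arithmetic on the 512-byte block size; a short sketch of how those numbers are derived:
static class BlockMath
{
    // Records are packed into 512-byte blocks; whatever is left over in each block is 0xFF padding.
    public static (int RecordsPerBlock, int FillBytes) Layout(int recordLength)
    {
        int perBlock = 512 / recordLength;            // e.g. 170 -> 3 records per block
        int fill = 512 - perBlock * recordLength;     // e.g. 170 -> 2 fill bytes
        return (perBlock, fill);
    }
}
// BlockMath.Layout(56) -> (9, 8); BlockMath.Layout(26) -> (19, 18); BlockMath.Layout(128) -> (4, 0)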
Deleted records are filled with 00h's until reused.
DataFlex .DAT files can be opened from .FLX files using DIRECT_INPUT.
You can then use READ_BLOCK commands to read information.
Reading the FILELIST.CFG file is also much more efficient using
DIRECT_INPUT and READ_BLOCK. The first 128 bytes are fill and each
successive block of 128 bytes is a file in the list. In other words, if
you want file 15 then DIRECT_INPUT 'FILELIST.CFG' and READ_BLOCK off
(15*128) bytes. This would point you to the block for file 15. From
there you can read off bytes to find the Root Name, Description, and
DataFlex Name using the following layout.
DECIMAL HEX DESCRIPTION
01 - 41 00 - 28 FILE ROOT NAME (NULL TERMINATED)
42 - 74 29 - 49 FILE DESCRIPTION (NULL TERMINATED)
75 - 128 4A - 7F DATAFLEX FILE NAME (NULL TERMINATED)
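For a .NET reader, a rough C# equivalent of the DIRECT_INPUT/READ_BLOCK approach above, assuming the 128-byte entry layout just described (the class, method and path names are mine):
using System;
using System.IO;
using System.Text;
static class FileListSketch
{
    // Reads one 128-byte FILELIST.CFG entry. The first 128 bytes of the file are fill,
    // so the entry for file N starts at offset N * 128.
    public static (string Root, string Description, string DfName) ReadEntry(string path, int fileNumber)
    {
        using (var fs = File.OpenRead(path))
        {
            var block = new byte[128];
            fs.Seek(fileNumber * 128L, SeekOrigin.Begin);
            fs.Read(block, 0, 128);
            return (NullTerminated(block, 0x00, 41),    // 00-28 file root name
                    NullTerminated(block, 0x29, 33),    // 29-49 file description
                    NullTerminated(block, 0x4A, 54));   // 4A-7F DataFlex file name
        }
    }
    private static string NullTerminated(byte[] buffer, int start, int maxLength)
    {
        int end = Array.IndexOf(buffer, (byte)0, start, maxLength);
        int length = end < 0 ? maxLength : end - start;
        return Encoding.ASCII.GetString(buffer, start, length);
    }
}
// var entry = FileListSketch.ReadEntry(@"D:\FILELIST.CFG", 15);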
In the file I am decoding (table version 3.0), the record length is not at the offset listed above: the field list starts at offset 0x2E0, and the gaps between records appear to be padded with 0x20 rather than 0x00. Also, records are not aligned to 512-byte blocks; the record size is instead rounded up in increments of 128. The null record starts at 0xC00. The padded record size can be computed as (FileSize - 0xC00) / RecordCount, but the correct way is to read it as an unsigned integer from offset 0x9A. The field count is at offset 0xA5.
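As a quick sanity check of those 3.0 offsets (assuming, as the parser below does, that the record count is still the first dword of the header; path and variable names are placeholders):
// Cross-check of the 3.0 header values described above.
using (var fs = File.OpenRead(@"D:\Table.DAT"))
using (var br = new BinaryReader(fs))
{
    uint recordCount = br.ReadUInt32();                 // first dword of the header
    fs.Seek(0x9A, SeekOrigin.Begin);
    ushort recordLength = br.ReadUInt16();              // padded record length
    fs.Seek(0xA5, SeekOrigin.Begin);
    byte fieldCount = br.ReadByte();                    // number of fields
    long computed = (fs.Length - 0xC00) / recordCount;  // should roughly match recordLength
    Console.WriteLine($"reclen={recordLength}, computed={computed}, fields={fieldCount}");
}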
As for the data types:
Dates are stored in this embedded database as 3 bytes in BCD format. The BCD number is the count of days since a minimum date; 700000 corresponds to 1642-09-17, so that number can serve as the base.
Numbers: the number 510000001 is stored as 15 10 00 00 01, which is quite readable in a hex editor.
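As a worked example of both BCD encodings (the parser below generalizes this; the date bytes here are made up for illustration):
// Numeric: the first (high) nibble is skipped, the remaining nibbles are decimal digits.
byte[] numeric = { 0x15, 0x10, 0x00, 0x00, 0x01 };      // stores 510000001
long value = numeric[0] & 0x0F;                         // skip the first nibble, keep the 5
for (int i = 1; i < numeric.Length; i++)
    value = value * 100 + 10 * (numeric[i] >> 4) + (numeric[i] & 0x0F);
// value == 510000001

// Date: three unsigned BCD bytes give a day count; subtract the 700000 base and
// add the result as days to the base date used by the parser below.
byte[] date = { 0x70, 0x01, 0x23 };                     // hypothetical value 700123
long days = 0;
foreach (byte b in date)
    days = days * 100 + 10 * (b >> 4) + (b & 0x0F);
DateTime decoded = new DateTime(1642, 9, 14).AddDays(days - 700000);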
So here is a C# code snippet that parses a DAT file into a DataTable:
using System;
using System.Collections.Generic;
using System.Text;
using System.IO;
using System.Data;
using System.Linq;
namespace DataFlex
{
/// <summary>
/// Classes for parsing DataFlex DAT files version 3.0
/// </summary>
public enum DFFieldType
{
ASCII = 0,
Numeric = 1,
Date = 2,
Overlap = 3,
Unknown = 4
}
public class DFField
{
public DFFieldType Type;
public Type DataType;
public int Position;
public byte Length;
public decimal Precision;
public string Name;
private Byte[] _input;
public DFField(byte[] input, string name)
{
_input = input;
Name = name;
UInt16 helper = BitConverter.ToUInt16(input, 0);
Position = helper;
helper = (ushort)((input[2] & 0x0F) * 2);
if (helper > 0)
Precision = (decimal)Math.Pow(10, helper);
else
Precision = 0;
Length = input[3];
switch (input[4])
{
case 0: Type = DFFieldType.ASCII; DataType = typeof(string); break;
case 1: Type = DFFieldType.Numeric; DataType = typeof(decimal); break;
case 2: Type = DFFieldType.Date; DataType = typeof(DateTime); break;
case 3: Type = DFFieldType.Overlap; DataType = typeof(object); break;
default: Type = DFFieldType.Unknown; DataType = typeof(object); break; //Fallback so DataColumn creation does not fail
}
}
}
public class DFRow
{
public object[] _values;
public DFTable _DFTable;
public object[] Values { get { return _values; } }
public DFRow(byte[] input, DFTable dFTable)
{
_DFTable = dFTable;
_values = new object[dFTable.Fields.Length];
for (int i = 0; i < dFTable.Fields.Length; i++)
{
var f = dFTable.Fields[i];
object o;
switch (f.Type)
{
case DFFieldType.Date: o = BCDToDate(input, f.Position - 1, f.Length); break;
case DFFieldType.Numeric: o = BCDToDecimal(input, f.Precision, f.Position - 1, f.Length, true); break;
default: o = System.Text.Encoding.GetEncoding("ibm852").GetString(input, f.Position - 1, f.Length).TrimEnd(); break;
}
_values[i] = o;
}
}
private decimal BCDToDecimal(byte[] input, decimal precision, int start, int length, bool signed)
{
decimal result = 0;
uint i = 0;
for (i = 0; i < length; i++)
{
if (i > 0 || !signed)
{
result *= 100;
result += (decimal)(10 * (input[start + i] >> 4));
}
else
{
result *= 10;
}
result += (decimal)(input[start + i] & 0xf);
}
if (precision > 0)
result = (result / precision);
return (result);
}
private DateTime? BCDToDate(byte[] input, int start, int length)
{
DateTime baseDate = new DateTime(1642, 09, 14);
decimal baseNumber = 700000;
decimal dn = BCDToDecimal(input, 0, start, length, false);
dn = dn - baseNumber;
DateTime? result = null;
if (dn > 0)
{
result = baseDate.AddDays((double)dn);
}
return result;
}
}
public class DFTable
{
private long _beginning = 0xC00;
private UInt32 _RecordCount;
private DFField[] _Fields;
private List<DFRow> _Rows;
private UInt16 _RecordLength = 0;
private byte _FieldCount = 0;
private string[] _tags = null;
public DFField[] Fields
{
get { return _Fields; }
}
public List<DFRow> Rows
{
get { return _Rows; }
}
public DFRow LastRecord
{
get { return Rows[Rows.Count-1]; }
}
public DFTable(Stream datStream, bool readLastRecordOnly, string tagFile, string tableName)
{
if (File.Exists(tagFile))
_tags = File.ReadLines(tagFile).ToArray();
//Parsing header
byte[] input = new byte[4];
datStream.Read(input, 0, 4);
_RecordCount = BitConverter.ToUInt32(input, 0);
datStream.Seek(0x9A, SeekOrigin.Begin);
datStream.Read(input, 0, 2);
_RecordLength= BitConverter.ToUInt16(input, 0);
datStream.Seek(0xA5, SeekOrigin.Begin);
datStream.Read(input, 0, 1);
_FieldCount = input[0];
datStream.Seek(0x2E0, SeekOrigin.Begin);
_Fields = new DFField[_FieldCount];
//Parsing structure
int i;
for (i = 0; i < _FieldCount; i++)
{
input = new byte[8];
datStream.Read(input, 0, 8);
string name = _tags == null || _tags.Length<=i ? "F" + i.ToString() : _tags[i];
_Fields[i] = (new DFField(input, name));
}
_beginning = 0xC00 + _RecordLength; //Data always starts at 0xC00; skip the null record
_Rows = new List<DFRow>();
input = new byte[_RecordLength];
if (readLastRecordOnly)
{
for (int idx = 1; idx < _RecordCount; idx++)
{
datStream.Seek(_beginning + (_RecordCount - idx) * _RecordLength, SeekOrigin.Begin); //Seek to the last record, then walk backwards
datStream.Read(input, 0, _RecordLength);
if (input.Any(x => x != 0)) //Not deleted - not all zeroes
{
_Rows.Add(new DFRow(input, this));
break;
}
}
}
else
{
datStream.Seek(_beginning, SeekOrigin.Begin); //Go to beginning
for (int row = 0; row < _RecordCount; row ++)
{
datStream.Read(input, 0, _RecordLength);
if (input.Any(x=>x!=0)) //Not deleted
_Rows.Add(new DFRow(input, this));
}
}
}
/// <summary>
/// Converts the parsed table to a DataTable
/// </summary>
/// <returns></returns>
public DataTable ToDataTable()
{
DataTable dt = new DataTable();
DataColumn dc;
for (int i=0; i< this.Fields.Length; i++)
{
var f = this.Fields[i];
dc = new DataColumn(f.Name, f.DataType );
dt.Columns.Add(dc);
}
//Records, starting from the first one
foreach (var r in this.Rows)
{
DataRow dr = dt.NewRow();
int j = 0;
foreach (object v in r.Values)
{
dr[j] = v ?? DBNull.Value;
j++;
}
dt.Rows.Add(dr);
}
return dt;
}
/// <summary>
/// https://stackoverflow.com/a/4959869/2224701
/// </summary>
/// <param name="dt"></param>
/// <param name="csvFileName"></param>
public void SaveAsCSV(string csvFileName, bool header)
{
StringBuilder sb = new StringBuilder();
if (header)
{
IEnumerable<string> columnNames = this.Fields.
Select(column => column.Name);
sb.AppendLine(string.Join(",", columnNames));
}
foreach (DFRow row in this.Rows)
{
IEnumerable<string> fields = row.Values.Select(field =>
string.Concat("\"", field!=null ? (field is DateTime ? ((DateTime)field).ToShortDateString() : field.ToString()).Replace("\"", "\"\"") : "", "\""));
sb.AppendLine(string.Join(",", fields));
}
File.WriteAllText(csvFileName, sb.ToString());
}
}
}
Usage looks like this:
string fileToRead = @"D:\Table.DAT";
MemoryStream msAla = new MemoryStream(File.ReadAllBytes(fileToRead));
DFTable dft = new DFTable(msAla, false, tagFile, tname);
DataTable dt = dft.ToDataTable();
I am not aware of any open-source library that can do this. If it is a one-off, you might want to try "Visual DataPump", which can export your VDF database into a SQL database. It is not free, but for small jobs the evaluation version should work (for at least 60 days).