
In my play-framework-based web application, users can download all the rows of different database tables in csv or json format. The tables are relatively big (100k+ rows) and I am trying to stream back the result using chunking in Play 2.2.

The problem, however, is that although println statements show the rows being written to the Chunks.Out object, they do not show up on the client side! If I limit the number of rows sent back it works, but when I try to send back all the rows there is also a big delay at the start, and it ends in a timeout or the server running out of memory.

I use the Ebean ORM, the tables are indexed, and querying from psql doesn't take much time. Does anyone have an idea what the problem might be?

I appreciate your help a lot!

Here is the code for one of the controllers:

@SecureSocial.UserAwareAction
public static Result showEpex() {

    User user = getUser();
    if(user == null || user.getRole() == null)
        return ok(views.html.profile.render(user, Application.NOT_CONFIRMED_MSG));

    DynamicForm form = DynamicForm.form().bindFromRequest();
    final UserRequest req = UserRequest.getRequest(form);

    if(req.getFormat().equalsIgnoreCase("html")) {
        Page<EpexEntry> page = EpexEntry.page(req.getStart(), req.getFinish(), req.getPage());
        return ok(views.html.epex.render(page, req));
    }

    // otherwise chunk result and send back
    final ResultStreamer<EpexEntry> streamer = new ResultStreamer<EpexEntry>();
    Chunks<String> chunks = new StringChunks() {
            @Override
            public void onReady(play.mvc.Results.Chunks.Out<String> out) {

                Page<EpexEntry> page = EpexEntry.page(req.getStart(), req.getFinish(), 0);
                ResultStreamer<EpexEntry> streamer = new ResultStreamer<EpexEntry>();
                streamer.stream(out, page, req);
            }
    };
    return ok(chunks).as("text/plain");
}

And the streamer:

public class ResultStreamer<T extends Entry> {

private static ALogger logger = Logger.of(ResultStreamer.class);

public void stream(Out<String> out, Page<T> page, UserRequest req) {

    if(req.getFormat().equalsIgnoreCase("json")) {
        JsonContext context = Ebean.createJsonContext();
        out.write("[\n");
        for(T e: page.getList())
            out.write(context.toJsonString(e) + ", ");
        while(page.hasNext()) {
            page = page.next();
            for(T e: page.getList())
                out.write(context.toJsonString(e) + ", ");
        }
        out.write("]\n");
        out.close();
    } else if(req.getFormat().equalsIgnoreCase("csv")) {
        for(T e: page.getList())
            out.write(e.toCsv(CSV_SEPARATOR) + "\n");
        while(page.hasNext()) {
            page = page.next();
            for(T e: page.getList())
                out.write(e.toCsv(CSV_SEPARATOR) + "\n");
        }
        out.close();
    } else {
        out.write("Invalid format! Only CSV, JSON and HTML can be generated!");
        out.close();
    }
}


public static final String CSV_SEPARATOR = ";";
} 

And the model:

@Entity
@Table(name="epex")
public class EpexEntry extends Model implements Entry {

    @Id
    @Column(columnDefinition = "pg-uuid")
    private UUID id;
    private DateTime start;
    private DateTime finish;
    private String contract;
    private String market;
    private Double low;
    private Double high;
    private Double last;
    @Column(name="weight_avg")
    private Double weightAverage;
    private Double index;
    private Double buyVol;
    private Double sellVol;

    private static final String START_COL = "start";
    private static final String FINISH_COL = "finish";
    private static final String CONTRACT_COL = "contract";
    private static final String MARKET_COL = "market";
    private static final String ORDER_BY = MARKET_COL + "," + CONTRACT_COL + "," + START_COL;

    public static final int PAGE_SIZE = 100;

    public static final String HOURLY_CONTRACT = "hourly";
    public static final String MIN15_CONTRACT = "15min";

    public static final String FRANCE_MARKET = "france";
    public static final String GER_AUS_MARKET = "germany/austria";
    public static final String SWISS_MARKET = "switzerland";

    public static Finder<UUID, EpexEntry> find =
            new Finder<UUID, EpexEntry>(UUID.class, EpexEntry.class);

    public EpexEntry() {
    }

    public EpexEntry(UUID id, DateTime start, DateTime finish, String contract,
            String market, Double low, Double high, Double last,
            Double weightAverage, Double index, Double buyVol, Double sellVol) {
        this.id = id;
        this.start = start;
        this.finish = finish;
        this.contract = contract;
        this.market = market;
        this.low = low;
        this.high = high;
        this.last = last;
        this.weightAverage = weightAverage;
        this.index = index;
        this.buyVol = buyVol;
        this.sellVol = sellVol;
    }

    public static Page<EpexEntry> page(DateTime from, DateTime to, int page) {

        if(from == null && to == null)
            return find.order(ORDER_BY).findPagingList(PAGE_SIZE).getPage(page);
        ExpressionList<EpexEntry> exp = find.where();
        if(from != null)
            exp = exp.ge(START_COL, from);
        if(to != null)
            exp = exp.le(FINISH_COL, to.plusHours(24));
        return exp.order(ORDER_BY).findPagingList(PAGE_SIZE).getPage(page);
    }

    @Override
    public String toCsv(String s) {
        return id + s + start + s + finish + s + contract + 
                s + market + s + low + s + high + s + 
                last + s + weightAverage + s + 
                index + s + buyVol + s + sellVol;   
    }
}

1 Answer


1. Most browsers wait for 1-5 kb of data before displaying any result. You can check whether Play Framework actually sends the data with the command curl http://localhost:9000.

2. You create the streamer twice. Remove the first final ResultStreamer<EpexEntry> streamer = new ResultStreamer<EpexEntry>();

3. You use the Page class to retrieve a large dataset, which is the wrong tool here. It actually issues one big initial request and then one request per iteration, which is slow. Use plain findIterate() instead.

Add this to EpexEntry (feel free to change it as you need):

public static QueryIterator<EpexEntry> all() {
    return find.order(ORDER_BY).findIterate();
}
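To see why this matters at 100k+ rows, here is a plain-Java back-of-the-envelope comparison, not Ebean code; the class, helper names, and numbers are illustrative, of how many SELECTs each access pattern issues:

```java
// Illustrative only: counts the database round trips each access pattern
// would need for a full table scan. Names and numbers are hypothetical.
public class QueryCount {

    // PagingList-style access: one SELECT per page.
    static int pagingQueries(int rows, int pageSize) {
        return (rows + pageSize - 1) / pageSize;
    }

    // findIterate()-style access: a single SELECT whose result set is
    // streamed row by row on the server side.
    static int iterateQueries(int rows) {
        return 1;
    }

    public static void main(String[] args) {
        System.out.println(pagingQueries(100_000, 100)); // 1000 round trips
        System.out.println(iterateQueries(100_000));     // 1 round trip
    }
}
```

With PAGE_SIZE = 100 and 100k rows, paging costs around a thousand round trips before the last row is written, which would explain both the startup delay and the timeouts.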

Your new stream method implementation:

public void stream(Out<String> out, QueryIterator<T> iterator, UserRequest req) {

    if(req.getFormat().equalsIgnoreCase("json")) {
        JsonContext context = Ebean.createJsonContext();
        out.write("[\n");
        while (iterator.hasNext()) {
            out.write(context.toJsonString(iterator.next()) + ", ");
        }
        iterator.close(); // it's important to close the iterator
        out.write("]\n");
        out.close();
    } else // csv implementation here

And your onReady method:

            QueryIterator<EpexEntry> iterator = EpexEntry.all();
            ResultStreamer<EpexEntry> streamer = new ResultStreamer<EpexEntry>();
            streamer.stream(new BuffOut(out, 10000), iterator, req); // notice buffering here

4. Another problem is that you call Out<String>.write() too often. Each call to write() means the server has to send a new chunk of data to the client immediately, and every call to Out<String>.write() carries significant overhead.

The overhead appears because the server has to wrap the response in the chunked transfer encoding, which adds 6-7 bytes of framing per chunk. Since you send small messages, the overhead is significant. The server also wraps your response in TCP packets whose size is far from optimal, and it performs some internal work to send each chunk, which consumes resources too. As a result, the download bandwidth will be far from optimal.

Here is a simple test: send 10000 lines of text, TEST0 to TEST9999, in chunks. On my computer this takes 3 seconds on average, but with buffering it takes 65 ms. The download sizes are 136 kb and 87.5 kb respectively.
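The framing cost is easy to estimate. Assuming HTTP/1.1 chunked transfer encoding, where each chunk is framed as "<hex-length>\r\n" + data + "\r\n" (Play's actual on-the-wire framing may add slightly more), this sketch computes what each tiny write really costs:

```java
import java.nio.charset.StandardCharsets;

// Sketch of per-chunk framing in HTTP/1.1 chunked transfer encoding:
// each chunk is "<hex-length>\r\n" + data + "\r\n".
public class ChunkOverhead {

    static int framedSize(String data) {
        int len = data.getBytes(StandardCharsets.UTF_8).length;
        String hexLen = Integer.toHexString(len);
        return hexLen.length() + 2 + len + 2; // size line + data + trailing CRLF
    }

    public static void main(String[] args) {
        String line = "TEST0\n";                              // 6 bytes of payload
        System.out.println(framedSize(line));                 // 11 bytes on the wire
        System.out.println(framedSize(line) - line.length()); // 5 bytes of framing
    }
}
```

At one line per chunk, ~10000 chunks times ~5 framing bytes is roughly 50 kb, which is close to the 136 kb vs 87.5 kb gap measured above.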

Buffering example:

Controller:

public class Application extends Controller {
    public static Result showEpex() {
        Chunks<String> chunks = new StringChunks() {
            @Override
            public void onReady(play.mvc.Results.Chunks.Out<String> out) {
                new ResultStreamer().stream(out);
            }
        };
        return ok(chunks).as("text/plain");
    }
}

The new BuffOut class. It's dumb, I know:

public class BuffOut {
    private StringBuilder sb;
    private Out<String> dst;

    public BuffOut(Out<String> dst, int bufSize) {
        this.dst = dst;
        this.sb = new StringBuilder(bufSize);
    }

    public void write(String data) {
        if ((sb.length() + data.length()) > sb.capacity()) {
            dst.write(sb.toString());
            sb.setLength(0);
        }
        sb.append(data);
    }

    public void close() {
        if (sb.length() > 0)
            dst.write(sb.toString());
        dst.close();
    }
}
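For a unit-testable variant that drops the Play dependency, the same flush logic can be written against a plain java.util.function.Consumer<String> standing in for Chunks.Out<String>. The class and method names here are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Play-free equivalent of BuffOut: the Consumer<String> sink stands in for
// Chunks.Out<String>, so the flush behaviour can be tested in isolation.
public class BufferedSink {
    private final StringBuilder sb;
    private final Consumer<String> dst;

    public BufferedSink(Consumer<String> dst, int bufSize) {
        this.dst = dst;
        this.sb = new StringBuilder(bufSize);
    }

    public void write(String data) {
        // Flush when the pending buffer plus the new data would exceed capacity.
        if ((sb.length() + data.length()) > sb.capacity()) {
            dst.accept(sb.toString());
            sb.setLength(0);
        }
        sb.append(data);
    }

    public void close() {
        if (sb.length() > 0)
            dst.accept(sb.toString());
    }

    public static void main(String[] args) {
        List<String> flushes = new ArrayList<>();
        BufferedSink out = new BufferedSink(flushes::add, 20);
        for (int i = 0; i < 10; i++)
            out.write("TEST" + i + "\n"); // 6 bytes per line, 20-byte buffer
        out.close();
        System.out.println(flushes.size()); // 4 flushes instead of 10 writes
    }
}
```

Three 6-byte lines fit in the 20-byte buffer before each flush, so 10 writes collapse into 4 chunks; with a 10000-byte buffer as in the onReady snippet above, the reduction is far larger.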

This implementation has a download time of 3 seconds and a size of 136 kb:

public class ResultStreamer {
    public void stream(Out<String> out) {
        for (int i = 0; i < 10000; i++) {
            out.write("TEST" + i + "\n");
        }
        out.close();
    }
}

This implementation has a download time of 65 ms and a size of 87.5 kb:

public class ResultStreamer {
    public void stream(Out<String> out) {
        BuffOut out2 = new BuffOut(out, 1000);
        for (int i = 0; i < 10000; i++) {
            out2.write("TEST" + i + "\n");
        }
        out2.close();
    }
}
Answered 2013-10-19T18:58:13.287